date: int64 (1,220B–1,719B)
question_description: string (28–29.9k chars)
accepted_answer: string (12–26.4k chars)
question_title: string (14–159 chars)
1,690,865,303,000
I have two 1TB NVMe SSDs that were set up using LVM so that the two drives were combined into (roughly) a single 2TB logical volume. The laptop that booted from these drives has been sent off for repairs and whether it can be fixed or not is uncertain. In the meantime, I'd like to recover the data and ...
You don't need a VM, if your second computer also runs Linux, then you just need to install LVM (package is usually called lvm2), if you don't have it already installed and connect both drives (I don't have personal experience with USB NVMe enclosures, but these look fine if you don't have an option to use an internal...
Recovering Data from 2 NVMe SSDs That Were Set Up Using LVM
1,690,865,303,000
I've been spending some time working with self-encrypting SSDs recently, and I am stuck on how to access drive contents after I've unlocked it. Normally with this drive, you would load a Preboot Authentication image on startup that would unlock a partition containing the OS, and you would see the unlocked partition in...
I don't have self-encrypting NVMe disks that I could test these commands with. But based on how SAN LUN partitions can be rescanned, at least one of these ways might work: echo 1 > /sys/class/nvme/nvme0/rescan_controller or partprobe /dev/nvme0n1
How do I get a self-encrypting NVMe SSD partition to show up in /dev after unlocking it?
1,690,865,303,000
Having bought a used PC and now installing smartd on it, I'm getting smartd "Critical Warning (0x04): Reliability" emails about it (full pastebin). The Percentage Used: 112% is concerning. Is that enough for smartd to declare "Critical Warning (0x04): Reliability"? This message was generated by the smartd daemon runn...
Critical Warning is a bit field read directly from the device itself. smartmontools then only displays it to you... so you're looking for an interpretation that smartmontools itself doesn't do. Technically smartctl does not display this because of reason X or Y; the drive firmware sets the failure bit internally out o...
Is this drive dead?: Samsung SSD 970 EVO Plus 1TB
1,690,865,303,000
TL;DR: For a very simple sequential read, the bandwidth FIO reports is much slower than the NVMe SSD's sequential read capability. Main text: Hello everyone, I have been facing an issue while trying to achieve the maximum read bandwidth reported by the vendor for my Samsung 980 Pro 1T NVMe SSD. According to the Samsung product descri...
Consider increasing the test size or simply removing the limit. Using --size=1024m means you're targeting a specific range of NAND flash, which can limit the bandwidth. Opt for a time-based option and a smaller block size. By specifying --bs=1024m and the same size, you're essentially completing each loop with a sing...
FIO reports slower sequential read than the advertised NVMe SSD read bandwidth
1,690,865,303,000
On an HP ZBook G6 with two Kingston M.2 NVMe SSDs, Windows runs without problems. But when booting the Debian 12 installer ISO from a USB hard disk, the Linux 6.0 kernel does not detect these NVMe disks and detects only the USB boot disk in /proc/partitions. I tried modprobe nvme-core nvme nvmet but the repeated ...
The problem was caused by the enabled but unused BIOS option Storage Controller for Intel Optane (there is no Optane memory in the system in question). Disabling this BIOS setting shows a warning about possible data loss and accepting this warning allows the NVMEs to be recognized, in fact without data loss. As long a...
Debian 12 installer does not detect NVMe disks
1,690,865,303,000
I recently picked up this lovely laptop (HP Spectre X360 13), and want to install Manjaro on it. It has a Toshiba NVMe SSD (KXG50ZNV1T02). Seeing that this will be my first install of Linux onto an SSD of any sort, I would just like the community's feedback regarding any special precaution I should be taking when inst...
Regarding SSD/NVMe, I strongly suggest you read the Archwiki pages [1] and [2]. This is the starting point, and much more! As for which filesystem to use, take a look here [3] for example. For Thunderbolt, I have no experience with this hardware, so my only advice is to start from the Archwiki as well....
Installing Manjaro 18 on HP Spectre NVMe SSD
1,334,824,517,000
Right now, I know how to: find the per-process open files limit (ulimit -n); count all files opened by all processes (lsof | wc -l); get the maximum allowed number of open files (cat /proc/sys/fs/file-max). My question is: why is there a limit on the number of open files in Linux?
The reason is that the operating system needs memory to manage each open file, and memory is a limited resource - especially on embedded systems. As root user you can change the maximum of the open files count per process (via ulimit -n) and per system (e.g. echo 800000 > /proc/sys/fs/file-max).
Why is the number of open files limited in Linux?
1,334,824,517,000
How does one find large files that have been deleted but are still open in an application? How can one remove such a file, even though a process has it open? The situation is that we are running a process that is filling up a log file at a terrific rate. I know the reason, and I can fix it. Until then, I would like to...
If you can't kill your application, you can truncate instead of deleting the log file to reclaim the space. If the file was not open in append mode (with O_APPEND), then the file will appear as big as before the next time the application writes to it (though with the leading part sparse and looking as if it contained ...
Find and remove large files that are open but have been deleted
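A minimal sketch of that approach: lsof's +L1 option lists open files with a link count below 1 (i.e. deleted but still held open), and redirecting nothing into the log truncates it in place while the writer's descriptor stays valid. The log path here is hypothetical.
$ lsof +L1                    # find open-but-deleted files
$ : > /var/log/myapp.log      # truncate the still-existing log without removing it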
1,334,824,517,000
Sometimes, I would like to unmount a usb device with umount /run/media/theDrive, but I get a drive is busy error. How do I find out which processes or programs are accessing the device?
Use lsof | grep /media/whatever to find out what is using the mount. Also, consider umount -l (lazy umount) to prevent new processes from using the drive while you clean up.
How do I find out which processes are preventing unmounting of a device?
1,334,824,517,000
What is the difference between hard and soft limits in ulimit? For number of open files, I have a soft limit of 1024 and a hard limit of 10240. It is possible to run programs opening more than 1024 files. What is the soft limit for?
A hard limit can only be raised by root (any process can lower it). So it is useful for security: a non-root process cannot overstep a hard limit. But it's inconvenient in that a non-root process can't have a lower limit than its children. A soft limit can be changed by the process at any time. So it's convenient as l...
ulimit: difference between hard and soft limits
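A short illustration using the values from the question; an unprivileged process may move its soft limit anywhere up to the hard ceiling:
$ ulimit -Sn        # soft limit
1024
$ ulimit -Hn        # hard limit
10240
$ ulimit -Sn 4096   # raise the soft limit; no root needed while <= 10240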
1,334,824,517,000
I am running in an interactive bash session. I have created some file descriptors, using exec, and I would like to list what is the current status of my bash session. Is there a way to list the currently open file descriptors?
Yes, this will list all open file descriptors: $ ls -l /proc/$$/fd total 0 lrwx------ 1 isaac isaac 64 Dec 28 00:56 0 -> /dev/pts/6 lrwx------ 1 isaac isaac 64 Dec 28 00:56 1 -> /dev/pts/6 lrwx------ 1 isaac isaac 64 Dec 28 00:56 2 -> /dev/pts/6 lrwx------ 1 isaac isaac 64 Dec 28 00:56 255 -> /dev/pts/6 l-wx------ 1 i...
How to list the open file descriptors (and the files they refer to) in my current bash session
1,334,824,517,000
I want to determine which process has the other end of a UNIX socket. Specifically, I'm asking about one that was created with socketpair(), though the problem is the same for any UNIX socket. I have a program parent which creates a socketpair(AF_UNIX, SOCK_STREAM, 0, fds), and fork()s. The parent process closes fds...
Since kernel 3.3, it is possible using ss or lsof-4.89 or above — see Stéphane Chazelas's answer. In older versions, according to the author of lsof, it was impossible to find this out: the Linux kernel does not expose this information. Source: 2003 thread on comp.unix.admin. The number shown in /proc/$pid/fd/$fd is ...
Who's got the other end of this unix socketpair?
1,334,824,517,000
I know I can view the open files of a process using lsof at that moment in time on my Linux machine. However, a process can open, alter and close a file so quickly that I won't be able to see it when monitoring it using standard shell scripting (e.g. watch) as explained in "monitor open process files on linux (real-ti...
Running it with strace -e trace=open,openat,close,read,write,connect,accept your-command-here would probably be sufficient. You'll need to use the -o option to put strace's output somewhere other than the console, if the process can print to stderr. If your process forks, you'll also need -f or -ff. Oh, and you might...
How do I monitor opened files of a process in realtime?
1,334,824,517,000
I am trying to understand named pipes in the context of this particular example. I type <(ls -l) in my terminal and get the output bash: /dev/fd/63: Permission denied. If I type cat <(ls -l), I can see the directory contents. If I replace cat with echo, I think I get the terminal name (or is it?). echo <(ls ...
When you do <(some_command), your shell executes the command inside the parentheses and replaces the whole thing with a file descriptor, that is connected to the command's stdout. So /dev/fd/63 is a pipe containing the output of your ls call. When you do <(ls -l) you get a Permission denied error, because the whole li...
Why does process substitution result in a file called /dev/fd/63 which is a pipe?
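A quick demonstration of the difference described above; the substitution expands to a /dev/fd path, which echo merely prints while cat opens and reads it:
$ echo <(ls -l)   # the argument is just a filename
/dev/fd/63
$ cat <(ls -l)    # cat opens that fd path and reads the pipe's contents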
1,334,824,517,000
A file is being sequentially downloaded by wget. If I start unpacking it with cat myfile.tar.bz2 | tar -xj, it may unpack correctly or fail with "Unexpected EOF", depending on what is faster. How to "cat and follow" a file, i.e. output the content of the file to stdout, but don't exit on EOF, instead keep subscribed to tha...
tail +1f file I tested it on Ubuntu with the LibreOffice source tarball while wget was downloading it: tail +1f libreoffice-4.2.5.2.tar.xz | tar -tvJf - It also works on Solaris 10, RHEL3, AIX 5 and Busybox 1.22.1 in my Android phone (use tail +1 -f file with Busybox).
How do I "cat and follow" a file?
1,334,824,517,000
I need a command that will wait for a process to start accepting requests on a specific port. Is there something in Linux that does that? while (checkAlive -host localhost -port 13000 == false) do some waiting ...
The best test to see if a server is accepting connections is to actually try connecting. Use a regular client for whatever protocol your server speaks and try a no-op command. If you want a lightweight TCP or UDP client you can drive simply from the shell, use netcat. How to program a conversation depends on the proto...
How do I tell a script to wait for a process to start accepting requests on a port?
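A minimal polling loop built on netcat, along the lines the answer suggests; this assumes an nc that supports -z (probe without sending data), and the host, port and retry interval are examples:
until nc -z localhost 13000; do
    sleep 1   # not accepting connections yet; try again
done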
1,334,824,517,000
In Linux, in /proc/PID/fd/X, the links for file descriptors that are pipes or sockets have a number, like: l-wx------ 1 user user 64 Mar 24 00:05 1 -> pipe:[6839] l-wx------ 1 user user 64 Mar 24 00:05 2 -> pipe:[6839] lrwx------ 1 user user 64 Mar 24 00:05 3 -> socket:[3142925] lrwx------ 1 user user 64 Mar 24 00:05 ...
That's the inode number for the pipe or socket in question. A pipe is a unidirectional channel, with a write end and a read end. In your example, it looks like FD 5 and FD 6 are talking to each other, since the inode numbers are the same. (Maybe not, though. See below.) More common than seeing a program talking to its...
/proc/PID/fd/X link number
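To locate the process holding the other end of a particular pipe, one approach is to search every process's fd symlinks for the same inode; a sketch using inode 6839 from the example (run as root to see other users' processes):
$ find /proc/[0-9]*/fd -lname 'pipe:\[6839\]' 2>/dev/null   # prints /proc/PID/fd/N for each match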
1,334,824,517,000
Hi, I have many files that have been deleted, but for some reason the disk space associated with the deleted files cannot be reclaimed until I explicitly kill the process that has the file open. $ lsof /tmp/ COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME cron 1623 root 5u REG 0,...
On unices, filenames are just pointers (inodes) that point to the memory where the file resides (which can be a hard drive or even a RAM-backed filesystem). Each file records the number of links to it: the links can be either the filename (plural, if there are multiple hard links to the same file), and also every time...
Best way to free disk space from deleted files that are held open
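A sketch of reclaiming the space without killing the process, reusing the PID and fd from the lsof output above (cron, PID 1623, fd 5): opening the /proc entry with truncation releases the blocks while the descriptor stays open. Run as root:
# : > opens the target with O_TRUNC, shrinking the deleted file to 0 bytes
: > /proc/1623/fd/5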
1,334,824,517,000
My problem is that with lsof -p pid I can find out the list of open files of a process whose process id is pid. But is there a way to find out the file offset of each accessed file? Please give me some suggestions.
On linux, you can find the position of the file descriptor number N of process PID in /proc/$PID/fdinfo/$N. Example: $ cat /proc/687705/fdinfo/36 pos: 26088 flags: 0100001 The same file can be opened several times with different positions using several file descriptors, so you'll have to choose the relevant one i...
How to find out the file offset of an opened file?
1,334,824,517,000
Say I have process 1 and process 2. Both have a file descriptor corresponding to the integer 4. In each process however the file descriptor 4 points to a totally different file in the Open File Table of the kernel: How is that possible? Isn't a file descriptor supposed to be the index to a record in the Open File Tab...
The file descriptor, i.e. the 4 in your example, is the index into the process-specific file descriptor table, not the open file table. The file descriptor entry itself contains an index to an entry in the kernel's global open file table, as well as file descriptor flags.
How can same fd in different processes point to the same file?
1,334,824,517,000
I have opened a directory with vim some/dir. I can navigate within the tree, yet once I have opened a file I wonder: how do I close the file view in order to go back to the directory listing to navigate to another file? :wq is not an option, as it closes the whole vim session. I guess there is a mode for that, yet I do not know what i...
How about :e .? This opens the current directory in Vim, i.e. it opens the file explorer. Because I have autochdir setting set, this shows the directory that the currently edited file is in.
How to switch to the directory listing from file view in vim?
1,334,824,517,000
I need to know if a process with a given PID has opened a port without using external commands. I must then use the /proc filesystem. I can read the /proc/$PID/net/tcp file for example and get information about TCP ports opened by the process. However, on a multithreaded process, the /proc/$PID/task/$TID directory wil...
I can read the /proc/$PID/net/tcp file for example and get information about TCP ports opened by the process. That file is not a list of tcp ports opened by the process. It is a list of all open tcp ports in the current network namespace, and for processes running in the same network namespace is identical to the co...
Read "/proc" to know if a process has opened a port
1,334,824,517,000
In many cases lsof is not installed on the machines that I have to work with, but the "function" of lsof would be needed very much (for example on AIX). :\ Are there any lsof like applications in the non-Windows world? For example, I need to know which processes use the /home/username directory?
I know of fuser; see if it's available on your system.
Alternatives for "lsof" command?
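For the example in the question, a fuser invocation might look like this; -v adds the user, the access type (e.g. cwd) and the command name:
$ fuser -v /home/username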
1,334,824,517,000
If I do a: echo foo > /dev/pts/12 Some process will read that foo\n from its file descriptor to the master side. Is there a way to find out what that(those) process(es) is(are)? Or in other words, how could I find out which xterm/sshd/script/screen/tmux/expect/socat... is at the other end of /dev/pts/12? lsof /dev/pt...
In 2017 Linux got a new feature which can simplify this process a bit (commit d01c3289e7d, available in Linux 4.14 and newer). After getting the list of processes with /dev/ptmx open: $ fuser /dev/ptmx /dev/ptmx: 1330334 1507443 The pts number can be obtained like this: for pid in $(fuser /dev/ptmx 2>/dev/nul...
How can we know who's at the other end of a pseudo-terminal device?
1,334,824,517,000
If I start an app with this command: /path/to/my/command >> /var/log/command.log And the command doesn't return, is there a way, from another prompt, to see what the STDOUT redirect is set to? I'm looking for something like either cat /proc/PID/redirects or ps -??? | grep PID but any method will do.
Check out the file descriptor #1 (STDOUT) in /proc/$PID/fd/. The kernel represents this file as symbolic link to a file the descriptor is redirected to. $ readlink -f /proc/20361/fd/1 /tmp/file
See the STDOUT redirect of a running process
1,334,824,517,000
I have a node.js process that uses fs.appendFile to add lines to file.log. Only complete lines of about 40 chars per lines are appended, e.g. calls are like fs.appendFile("start-end"), not 2 calls like fs.appendFile("start-") and fs.appendFile("end"). If I move this file to file2.log can I be sure that no lines are lo...
As long as you don't move the file across file-system borders, the operation should be safe. This is because of how "moving" is actually done. If you mv a file on the same file-system, the file isn't actually touched, but only the file-system entry is changed. $ mv foo bar actually does something like $ l...
Is it safe to move a file that's being appended to?
1,334,824,517,000
I've always wondered this but never took the time to find out, so I'll do so now - how portable is the usage shown here of either /proc/$$/fd/$N or /dev/fd/$N? I understand POSIX guarantees /dev/null, /dev/tty, and /dev/console (though I only found that out the other day after reading the comments on this answer) but ...
The /proc/PID/fd/NUM symlinks are quasi-universal on Linux, but they don't exist anywhere else (except on Cygwin which emulates them). /proc/PID/fd/NUM also exist on AIX and Solaris, but they aren't symlinks. Portably, to get information about open files, install lsof. Unices with /proc/PID/fd Linux Under Linux, /proc...
Portability of file descriptor links
1,334,824,517,000
I have two instances of a process running. One of them is "frEAkIng oUT!" and printing errors non-stop to STDOUT. I want to kill the broken process but I have to make sure I don't terminate the wrong one. They were both started at about the same time, and using top I can see they both use about the same amount of memor...
On Linux, assuming you want to know what is writing to the same resource as your shell's stdout is connected to, you could do: strace -fe write $(lsof -t "/proc/$$/fd/1" | sed 's/^/-p/') That would report the write() system calls (on any file descriptor) of every process that have at least one file descriptor open on...
How to find out what process is writing to STDOUT?
1,334,824,517,000
Just for fun: Is there a way to monitor/capture/dump whatever is being written to /dev/null? On Debian, or FreeBSD, if it matters, any other OS specific solutions are also welcome.
Making /dev/null a named pipe is probably the easiest way. Be warned that some programs (sshd, for example) will act abnormally or fail to execute when they find out that it isn't a special file (or they may read from /dev/null, expecting it to return EOF). # Remove special file, create FIFO and read from it rm /dev/n...
Monitor what is being sent to /dev/null?
1,334,824,517,000
Is there a (technical or practical) limit to how large you can configure the maximum number of open files in Linux? Are there some adverse effects if you configure it to a very large number (say 1-100M)? I'm thinking server usage here, not embedded systems. Programs using huge amounts of open files can of course eat...
I suspect the main reason for the limit is to avoid excess memory consumption (each open file descriptor uses kernel memory). It also serves as a safeguard against buggy applications leaking file descriptors and consuming system resources. But given how absurdly much RAM modern systems have compared to systems 10 year...
Largest allowed maximum number of open files in Linux
1,334,824,517,000
I'm trying to use NTP to update the time on my machine. However, it gives me an error: host # ntpdate ntp1.example.org 10 Aug 12:38:50 ntpdate[7696]: the NTP socket is in use, exiting What does the error "socket is in use" mean? How can I see what is using this socket? This happens on my CentOS 4.x system, but I also...
You can do lsof -n | grep -i "TCP\|UDP" | grep -v "ESTABLISHED\|CLOSE_WAIT" to see all of your listening ports, but dollars to donuts that ntpd is running: service ntpd status And as for "What does socket in use" mean? If I can be forgiven for smoothing over some wrinkles (and for the very basic explanation, apolo...
What is using this network socket?
1,334,824,517,000
I just renamed a log file to "foo.log.old", and assumed that the application will start writing a new logfile at "foo.log". I was surprised to discover that it tracked the logfile to its new name, and kept appending lines to "foo.log.old". In Windows, I'm not familiar with this kind of behavior - I don't know if it's ...
Programs connect to files through a number maintained by the filesystem (called an inode on traditional unix filesystems), to which the name is just a reference (and possibly not a unique reference at that). So several things to be aware of: Moving a file using mv does not change that underlying number unless you move...
How do open files behave on linux systems?
1,334,824,517,000
What happens to the files that are deleted while they have a file handle open to them? I have been wondering this ever since I figured out I could delete a video file while it was playing in MPlayer and it would still play through to the end. Where is it pulling the data from? Is it still coming from the hard drive? D...
The inodes still persist on disk, although no more hard links to the inodes exist. They will be deleted when the file descriptor is closed. Until then, the file can be modified as normal, barring operations that require a filename/hard link. debugfs and similar tools can be used to recover the contents of the inodes.
Where do open file handles go when they die?
1,334,824,517,000
From the Unix Power Tools, 3rd Edition: Instead of Removing a File, Empty It section: If an active process has the file open (not uncommon for log files), removing the file and creating a new one will not affect the logging program; those messages will just keep going to the file that’s no longer linked. Emptyi...
When you delete a file you really remove a link to the file (to the inode). If someone already has that file open, they get to keep the file descriptor they have. The file remains on disk, taking up space, and can be written to and read from if you have access to it. The unlink function is defined with this behaviour ...
How can a log program continue to log to a deleted file?
1,334,824,517,000
I'm using uclinux and I want to find out which processes are using the serial port. The problem is that I have no lsof or fuser. Is there any other way I can get this information?
This one-liner should help: ls -l /proc/[0-9]*/fd/* | grep /dev/ttyS0 replace ttyS0 with the actual port name. Example output: lrwx------ 1 root dialout 64 Sep 12 10:30 /proc/14683/fd/3 -> /dev/ttyUSB0 That means pid 14683 has /dev/ttyUSB0 open as file descriptor 3
How to find processes using serial port
1,334,824,517,000
How to change the applications associated with certain file-types for gnome-open, exo-open, xdg-open, gvfs-open and kde-open? Is there a way by editing config files or by a command-line command? Is there a way to do this using a GUI? For both questions: How to do it per user basis, how to do it system-wide?
It's all done with MIME types in various databases. xdg-mime can be used to query and set user values.
Change default applications used by gnome-open, exo-open, xdg-open, gvfs-open and kde-open
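A sketch of the per-user command-line route with xdg-mime; the .desktop name varies by distribution, so Evince here is only an example:
$ xdg-mime query filetype report.pdf          # what MIME type is this file?
application/pdf
$ xdg-mime query default application/pdf      # which application handles it now?
$ xdg-mime default org.gnome.Evince.desktop application/pdf   # set the per-user default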
1,334,824,517,000
I have two shells open. The first is in directory A. In the second, I remove directory A, and then recreate it. When I go back to the first shell, and type ls, the output is: ls: cannot open directory .: Stale file handle Why? I thought the first shell (the one that remained open inside a non-existent directory) woul...
A directory (like any file) is not defined by its name. Think of the name as the directory's address. When you move the directory, it's still the same directory, just like if you move to a different house, you're still the same person. If you remove a directory and create a new one by the same name, it's a new directo...
`ls` error when directory is deleted
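The effect is easy to reproduce; a minimal two-shell session with /tmp/A as in the question:
# shell 1
$ mkdir /tmp/A && cd /tmp/A
# shell 2
$ rm -r /tmp/A && mkdir /tmp/A
# shell 1: "." still refers to the old, deleted directory
$ ls
ls: cannot open directory .: Stale file handle
$ cd /tmp/A    # re-resolve the name to reach the new directory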
1,334,824,517,000
Possible Duplicate: /proc/PID/fd/X link number I have a question regarding the file descriptors and their linkage in the proc file system. I've observed that if I list the file descriptors of a certain process from proc ls -la /proc/1234/fd I get the following output: lr-x------ 1 root root 64 Sep 13 07:12 0...
Do they point to some property of the resource? Yes. They're a unique identifier that allows you to identify the resource. Also why are some of the links broken? Because they're links to things that don't live in the filesystem, you can't follow the link the normal way. Essentially, links are being abused as a way...
File descriptor linked to socket or pipe in proc [duplicate]
1,334,824,517,000
My situation is that from time to time a specific process (in this case, it's Thunderbird) doesn't react to user input for a minute or so. I found out using iotop that during this time, it writes quite a lot to the disk, and now I want to find out which file it writes to, but unfortunately iotop gives only stats per p...
If you attach strace to the process just when it's hung (you can get the pid and queue the command up in advance, in a spare terminal), it'll show the file descriptor of the blocking write. Trivial example: $ mkfifo tmp $ cat /dev/urandom > tmp & [1] 636226 # this will block on open until someone opens for reading ...
How to find out which file is currently written by a process
1,334,824,517,000
The default open file limit per process is 1024 on - say - Linux. For certain daemons this is not enough. Thus, the question: How to change the open file limit for a specific user?
On Linux you can configure it via limits.conf, e.g. via # cd /etc/security # echo debian-transmission - nofile 8192 > limits.d/transmission.conf (which sets both the hard and soft limit for processes started under the user debian-transmission to 8192) You can verify the change via: # sudo -u debian-transmission bash ...
How to configure the process open file limit of a user?
1,334,824,517,000
I want to move large file created by external process as soon as it's closed. Is this test command correct? if lsof "/file/name" then # file is open, don't touch it! else if [ 1 -eq $? ] then # file is closed mv /file/name /other/file/name else ...
From the lsof man page Lsof returns a one (1) if any error was detected, including the failure to locate command names, file names, Internet addresses or files, login names, NFS files, PIDs, PGIDs, or UIDs it was asked to list. If the -V option is specified, lsof will indicate the search i...
Move file but only if it's closed
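A sketch of the corrected test, relying on the documented exit status; note that check-then-move is inherently racy, since the external process could reopen the file between the two commands:
if ! lsof -- /file/name > /dev/null 2>&1; then
    # lsof found no process holding the file open
    mv /file/name /other/file/name
fi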
1,334,824,517,000
Hi everybody, I want to start by saying thank you for your time! I have a problem and don't really know what to do to solve it. When I download something and I click on the arrow in Firefox to see my downloads and then click on the folder next to the application name, it should open the folder where it is saved? (...
!!!!! I don't know if this will work with distros other than Linux Lite !!!!! What happens if you install VSCode (it can be other editors too): there is something in its code that tells your system that VSCode can open files and directories. So your system puts VSCode in front of your file manager (Linux Lite 4.8 == T...
When clicking "open folder", the system launches VSCode
1,334,824,517,000
Consider this simple scenario: I open a text file ~/textfile.txt with vim in one terminal (tried with both edit and read-only modes). In a different terminal, I run /usr/sbin/lsof ~/textfile.txt Get no results Why?
When you use vi/vim to edit a file you aren't actually holding ~/<filename> open; you are reading the file into ~/.<filename>.swp and then holding that temp file open. If you run lsof ~/.<filename>.swp it will show you the information you are looking for. NOTE: If you have multiple people editing the same file you will ...
lsof doesn't return files open by the same user
1,334,824,517,000
After installing Wine, Notepad has became a default application to open unknown textual files by double click. I'd like to eliminate this behaviour and remove Notepad from the list of applications offered to open an unknown type file. I've deleted /usr/share/applications/wine-notepad.desktop, but this didn't help. How...
I had that problem too some months ago, and I remember I had to delete some .desktop files that were inside the $HOME/.local/share/applications folder. I think you should delete any file that has notepad as part of its name, and also you should try to delete (or move somewhere else) the files wine-extension-*.
How to remove Notepad from the applications list?
1,334,824,517,000
I've had a mac at work lately, and was amazed to see that Xcode would still find my latest project after I renamed its folder and moved it someplace else. Now I understand that this is the result of a heavy infrastructure at work, but I was wondering if it would be possible to somehow come up with similar functionali...
Well on Linux you could use inotify to track changes to your files. Inotify is in-kernel and has bindings to many different languages allowing you to quickly script such functionality if the app you are working with does not support inotify yet.
Strategies for maintaining a reference to a file after it was moved or renamed?
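With the inotify-tools package installed, a sketch of watching a tree for renames and moves from the shell (the path is an example):
$ inotifywait -m -r -e moved_from,moved_to,create,delete ~/projects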
1,334,824,517,000
I have a process that has been running for a very long time. I accidentally deleted the binary executable file of the process. Since the process is still running and is unaffected, the original binary must still exist somewhere else.... How can I recover it? (I use CentOS 7; the running process is written in C++)
It could only be in memory and not recoverable, in which case you'd have to try to recover it from the filesystem using one of those filesystem recovery tools (or from memory, maybe). However! $ cat hamlet.c #include <unistd.h> int main(void) { while (1) { sleep(9999); } } $ gcc -o hamlet hamlet.c $ md5sum hamlet 3055...
How to recover the deleted binary executable file of a running process
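While the process is alive, /proc keeps a reference to the executable itself, so a sketch of the recovery (PID 12345 is hypothetical) is simply:
$ cp /proc/12345/exe /tmp/recovered    # /proc/PID/exe still points at the deleted binary
$ md5sum /tmp/recovered                # compare against a known-good copy, as in the demo above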
1,358,579,170,000
I have an hourly, hour-long crontab job running with some mtr (traceroute) output every 10 minutes (that is going to go on for over an hour before being emailed back to me), and I want to see the current progress thus far. On Linux, I have used lsof -n | fgrep cron (lsof is similar to BSD's fstat), and it seems like ...
The file can be access through the /proc filesystem: you already know the PID and the FD from the lsof output. cat /proc/21742/fd/5
How can I access a deleted open file on Linux (output of a running crontab task)?
1,358,579,170,000
I started downloading a big file and accidentally deleted it a while ago. I know how to get its current contents by copying /proc/<pid>/fd/<fd> with cp, but since the download is still in progress it'll be incomplete at the time I copy it someplace else. Can I somehow salvage the file right at the moment the download finishes but ...
Using tail in follow mode should allow you to do what you want. tail -n +0 -f /proc/<pid>/fd/<fd> > abc.deleted I just did a quick test and it seems to work here. You did not mention whether your file was a binary file or not. My main concern is that it may not copy from the start of file but the -n +0 argument shoul...
Recover deleted file that is currently being written to
1,358,579,170,000
On Linux: Normally pseudo terminals are allocated one after the other. Today I realized that even after a reboot of my laptop the first opened terminal window (which was always pts/0 earlier) suddenly became pts/5. This was weird and made me curious. I wanted to find out which process is occupying the device /dev/...
If you have fuser installed and have the permission to use sudo: for i in $(sudo fuser /dev/pts/0); do ps -o pid= -o command= -p $i done eg: 24622 /usr/bin/python /usr/bin/terminator 24633 ksh93 -o vi
Which process is occupying a certain pseudo terminal pts/X?
1,358,579,170,000
I have always been confused about why the file manager in Linux cannot stop applications from opening a single file twice at the same time. Specifically, I want to stop the PDF reader Okular from opening the file A.pdf again when I have already opened it. I need to get a warning or just be shown the opened copy of the ...
A file manager is responsible for invoking applications to open a file. It has no control over what the application does with the file, and in particular whether the application will open a new window if you open the same file twice. Having the same file open in multiple windows can be useful, for example when you wan...
If I open the same file twice in Okular, switch to the existing window
1,358,579,170,000
How do I get Emacs to recognize new file extensions? For example if I have a .c file and I open it in Emacs, I get the correct syntax highlighting for C, but if I have a .bob file format (which I know to be C), how do I tell Emacs to interpret it in the same way as a .c file?
This is described on Emacs Beginner's Howto. With the line (setq auto-mode-alist (cons '("README" . text-mode) auto-mode-alist)) You tell emacs to enter "text-mode" if you open a file which is named README. with (setq auto-mode-alist (cons '("\\.html$" . html-helper-mode) auto-mode-alist)) (setq auto-mode-alist (con...
Emacs' file extension recognition
1,358,579,170,000
I have a shell script which will be executed by multiple instances, and if an instance is accessing a file and doing some operation, how can I make sure other instances are not accessing the same file and corrupting the data? My question is not about controlling the parallel execution but dealing with file lock or flaggin...
Linux normally doesn't do any locking (unlike Windows). This has many advantages, but if you must lock a file, you have several options. I suggest flock: apply or remove an advisory lock on an open file. This utility manages flock(2) locks from within shell scripts or from the command line. For a single command...
How to make sure only one instance accessing the file at a time in a folder?
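A common flock idiom for shell scripts, as a sketch; the lock-file path is an example, and the lock is released automatically when fd 9 closes at the end of the subshell:
(
    flock -x 9 || exit 1    # take an exclusive lock, blocking until it is free
    # ... read/modify the shared file here ...
) 9> /var/lock/myscript.lock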
1,358,579,170,000
I have an application running that is generating a large (~200GB) output file, and takes about 35 hours to run (currently I'm about 12 hours in). The application just opens the file once then keeps it open as it is writing until it is complete; the application also does a lot of random access writes to the file (i.e. ...
Posting another solution, since the file being written randomly breaks my tail idea. rsync looks promising here, since it can operate using a delta-transfer algorithm, saving transfer time by only sending the changed parts of a file. If you run rsync on two local files, it will default to --whole-...
Moving an open file to a different device
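A sketch of that rsync invocation (paths are examples): --inplace writes into the existing destination file and --no-whole-file keeps the delta-transfer algorithm enabled even for a local copy, so repeated runs move only changed regions:
$ rsync --inplace --no-whole-file /data/output.bin /mnt/other/output.bin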
1,358,579,170,000
Are file descriptors unique within a process, or throughout the whole system? Every process seems to use the same descriptors for stdin and stdout. Is there something special about these? How do stdin and stdout work? I realize /dev/fd is a link to /proc/self/fd, but how do they all have the same number? Ed...
Several things might be confusing here. File descriptors are attached to a file (in the general sense) and are specific to a given process. File descriptors are themselves referred to via numeric ids by their associated process, but one file descriptor can have several ids. Example: ids 1 and 2 which are called standard...
file descriptors and /dev/fd
1,358,579,170,000
Standard file descriptors <= 2 are opened by default. A program can write to or read from, a file descriptor after 2, without using the open system call to obtain such a descriptor. The program can then advertise in its manual, which file descriptors it is using and how. To make use of this, a POSIX shell can open a...
When you use process substitution with <(...) or >(...), bash will open a pipe to the other program on an arbitrary high file descriptor (I think it used to count up from 10, but now it counts down from 63) and pass the name as /dev/fd/N on the command line of the first program. This isn't POSIX, but other shells also...
Which programs use a file descriptor higher than 2?
1,358,579,170,000
My understanding is that a file descriptor is an integer which is a key in the kernel's per-process mapping to objects such as open()ed files, pipes, sockets, etc. Is there a proper, short, and specific name for “open files/sockets/pipes/...”, the referents of file descriptors? Calling them “files” leads to confusion ...
There is not really a more specific term. In traditional Unix, file descriptors reference entries in the file table, entries of which are referred to as files, or sometimes open files. This is in a specific context, so while obviously the term file is quite generic, in the context of the file table it specifically ref...
What is the referent of a file descriptor?
1,358,579,170,000
I use tmux at work as my IDE. I also run vim in a variety of tmux panes and will fairly often background the process (or alternatively I just close the window - I have vim configured not to remove open buffers when the window is closed). Now I've a problem, because a file that I want to edit is open in one of my other...
It doesn't tell me everything, but I used fuser ~/.myfile.txt.swp which gave me the PID of the vim session. Running ps aux | grep <PID> I was able to find out which vim session I was using, which gave me a hint as to which window I had it open in. Thanks to Gilles's inspiration and a bit of persistence and luck, I came...
Is it possible to find which vim/tmux has my file open?
1,358,579,170,000
I have a Solaris 10 server with autofs-mounted home dirs. On one server they are not unmounted after the 10 min timeout period. We've got AUTOMOUNT_TIMEOUT=600 in /etc/default/autofs, I ran automount -t 600, disabled and re-enabled svc:/system/filesystem/autofs:default service and nothing seems to work. My suspicion i...
You could try rwsnoop (http://dtracebook.com/index.php/File_System:rwsnoop) to monitor i/o access using dtrace: # rwsnoop - snoop read/write events. # Written using DTrace (Solaris 10 3/05). # # This is measuring reads and writes at the application level. This matches # the syscalls read, write, pread and pw...
What process is accessing a mounted filesystem sporadically?
1,358,579,170,000
Under Linux I often use /proc/<pid>/fd/[0,1,2] to access std[in,out,err] of any running process. Is there a way to achieve the same result under FreeBSD and/or macOS?
See this StackOverflow link for a dtrace based answer to this. I've tested it on FreeBSD and it works perfectly: capture() { sudo dtrace -p "$1" -qn ' syscall::write*:entry /pid == $target && arg0 == 1/ { printf("%s", copyinstr(arg1, arg2)); } ...
Grab standard input/output of a running process under FreeBSD/macOS
1,358,579,170,000
I'm trying to increase the default file descriptor limits for processes on my system. Specifically I'm trying to get the limits to apply to the Condor daemon and its sub-processes when the machine boots. But the limits are never applied on machine boot. I have the limits set in /etc/sysctl.conf: [root@mybox ~]# cat /e...
Add ulimit -n 262144 to the condor init script.
File descriptor limits are lost after a system reboot
1,358,579,170,000
Let's say you open a file on which you have write permission. Meanwhile you change permissions and remove write permission while you still have the file open in some editor. What will happen if you edit and save it?
The permissions of a file are checked when the file is opened. Changing the permissions doesn't affect what processes that already have the file open can do with it. This is used sometimes with processes that start with additional privileges, open a file, then drop those additional privileges: they can still access th...
File permissions and saving
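A small demonstration of the open-time check, run as a regular user:
$ echo hello > f
$ exec 3< f     # open f for reading on fd 3
$ chmod 000 f
$ cat <&3       # the already-open descriptor still works
hello
$ cat f         # a fresh open is refused
cat: f: Permission denied
$ exec 3<&-     # close fd 3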
1,358,579,170,000
I constantly have many PDF files open. These are usually downloaded using chrome and immediately opened using evince. I sometimes want to persist the state of all my open PDF files, so I could re-open the same group of documents at a later time. This mostly happens when I need to reboot and want to have the same set ...
Under the assumption that the PDFs you are viewing have the extension .pdf, the following could work to get you a list of open PDFs: $ lsof | grep ".pdf$" If you only ever use Evince, see Gilles' similar answer. On my machine (with a few pdfs open) displayed output as follows evince 6267 myuser 14u REG ...
retrieving names of all open pdf files (in evince or otherwise)
1,358,579,170,000
When I open files by double-clicking a file with the mouse I always get one additional "Unsaved Document X", which is very annoying, because I have to close them all and click "Close without save" every time... This happens in dolphin, nautilus and krusader (those are the ones where I tried it, so I guess it's not becaus...
Felrood from Arch Linux forums provided a solution and I would like to share it here and close this question. Gedit seems to display data from stdin in a new "Unsaved document". For example: echo "foobar" | gedit What can be done is this: right click the Kmenu button -> edit applications -> find gedit there (for ...
Gedit opening an "Unsaved document" on opening files with mouse
1,358,579,170,000
Introduction Until recently, I thought that on ext file system, inodes have reference counters which count the number of times the file is referenced by a directory entry or a file descriptor. Then, I learned that the reference counter only counts the number of directory entries referencing it. To falsify this, I read...
You are confusing two different counters: the file system link counter and the file descriptor reference counter. The file system link counter counts how many links to an inode are in the file system itself. The inode is the structure that contains the file metadata. In ext* file systems this counter is stored in the...
When is a file freed in an ext file system?
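The two counters can be told apart in a short shell session, as a sketch: unlinking drops the on-disk link count to zero, but the data survives until the last open descriptor goes away:
$ echo data > f
$ stat -c %h f    # link count stored with the inode
1
$ exec 3< f       # take a file-descriptor reference
$ rm f            # link count drops to 0; the inode lives on
$ cat <&3         # still readable through the open descriptor
data
$ exec 3<&-       # last reference closed; now the inode is freed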
1,358,579,170,000
I'm running a java server on Debian with this command: java -jar myapp.jar [args] >> log.txt Once, I gzipped the log file to send it, and then realized the original file was gone, leaving me with only the .gzip. Although I created the file manually (and also tried to unzip the original) the app didn't log anymore t...
From man gzip: -k, --keep Keep (don't delete) input files during compression or decompression. So gzip -k log.txt should do the trick. (But generally, a real logging solution, i.e., some syslog daemon, maybe with using log4j, could possibly be preferable.)
Capturing new output after deleting the output file
1,358,579,170,000
This question is NOT a duplicate of this question Find out current working directory of a running process?, because the writing directory can be different from the working directory. For example, I start two processes by running my_script.sh in ~/ twice (one right after another). In my_script.sh, I have made it to wri...
This depends on how the script is writing: If directly by redirection (i.e. my_script.sh > ~/1212_000001/some_file), you can use lsof -p <script-pid> and you'll see the open file on your output directory. Otherwise, the output of ps axjf will show you the pid dependencies of sub-processes launched by your script, which m...
Find out to which directory a process is writing?
1,358,579,170,000
I am running Debian wheezy. File limits are increased to 100000 for every user. ulimit -a and ulimit -Hn / -Sn show me the right amounts of maximum open file limits even in screen. But for some reason I am not able to to have more than ~4000 connections / open files. from sysctl.conf: net.ipv4.tcp_fin_timeout = 1 net....
There are two settings that limit the number of open files: a per-process limit, and a system-wide limit. The system-wide limit is set by the fs.file-max sysctl, which can be configured in /etc/sysctl.conf (read at boot time) or set on the fly with the sysctl command or by writing to /proc/sys/fs/file-max. The per-pro...
Daemon's open file limit is reached even though the system limits have been increased
1,358,579,170,000
I have an external hard drive on which I use rsync to backup my home directory. Today, I tried to umount the drive. It said it was busy. So, I used fuser to figure out who was using it: /media/Panp9 Backups: 10198rce 10283rce 10284rce 10337rce 10338rce 10339rce 10341rce 10345rce 10348rce 10353rce 10354rce 10356rce 1...
“Most of these seem to be various bids of Nepomuk.” But you killed all of them, not just the Nepomuk ones. So some other process must have been caught in the fray — presumably one critical to KDE, without which the window manager or the session manager crashed, possibly the window manager or the session manager itself...
Bizarre Disk Use Problem
1,358,579,170,000
For example, the IDE I'm using at the moment (Aptana Studio) notifies me as soon as a file's contents it has open have been changed by some external program. I can imagine having a periodic loop run stat() on a file and check the time of last data modification. Is this how it's normally done or is there a blocking int...
The inotify system on Linux, or the kqueue system on BSD/OSX, gives you an event-driven ("interrupt-like") mechanism to do this.
Efficient mechanism to determine if open file has been externally modified?
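With inotify-tools, a single blocking call replaces a stat() polling loop; a sketch, with the file path as an example:
$ inotifywait -e close_write /path/to/watched.file   # blocks until a writer closes the file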
1,358,579,170,000
On my machine, ulimit -n returns 2560. Given that -n returns The maximum number of open file descriptors. Does it mean that the system won't allow more than 2560 open files at any given time? If not, how can I find out what hard limit the system imposes on open files?
File descriptors are created for pretty much everything (since everything in Linux is a file), from connecting to another computer over the internet to running most applications. The resource limit is for that particular point in time. Keep in mind that even after the resource isn't being used, it can take several cy...
Max Open Files, clarification needed
1,358,579,170,000
The so-called "standard streams" in Linux are stdin, stdout, and stderr. They must be called "standard" for a reason. Are there non-standard streams? Are those non-standard streams fundamentally treated differently by the kernel?
In this context, a “stream” is an open file in a process. (The word “stream” can have other meanings that are off-topic here.) The three standard streams are the ones that are supposed to be already open when a program starts. File descriptor 0 is called standard input because that's where a program is supposed to rea...
Are there "non-standard" streams in Linux/Unix?
1,358,579,170,000
First use Vim to edit a file, say /tmp/A. Assuming that vim process is the only one that accesses /tmp/A, then use "ctrl+z" to suspend the process, and execute fuser /tmp/A Then you see nothing in the output. However, if you use "less" to open that file, you could see the pid of less in the fuser output. Is there an...
Yes, vim does not open the file until it needs to save it. Instead, vim uses a temporary hidden swap file to save changes you make incrementally. Once you save the file (:w) it will write to the original file. You can see that for yourself by using lsof, i.e.: $ lsof -n -p `pidof vim` COMMAND PID USER FD TYPE DEV...
vim not shown in fuser
1,358,579,170,000
It is well known that UNIX systems won't actually delete a file on disk while the file is in use. So if a file is being accessed by process 1 and process 2 deletes the file using rm, process 1 continues to see the file; additionally the file descriptor link at /proc/(process 1 id)/fd reports the original contents of ...
If process 1 has already started reading the file before process 2 overwrites it, then it will have some part of the contents stored in the stdio buffer. Once it crosses the buffer-size boundary it will be forced to go to the kernel, and then it will find the new overwritten contents.
File delete versus overwrite and link at /proc/pid/fd
1,358,579,170,000
I'm trying the preview release of Flash Player "Square" for Linux and noticed that video files are now being deleted from the /tmp/ folder. Yet the files are still in use (I can see them with lsof): chromium- 8948 user 25u REG 8,5 2599793 229908 /tmp/FlashXXStJt3K (deleted) Is there a way to pr...
I've never tried this before, but... There is a link to the file in /proc/8948/fd/. You can catenate the file as root (it's only readable as root), and pipe it to a new file. Whether the file is intact, I've not verified.
Prevent libflashplayer.so from deleting a file?
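Using the PID and descriptor from the lsof line above (chromium, PID 8948, fd 25), a sketch of saving a copy before the player exits (as root; the destination name is an example):
$ cp /proc/8948/fd/25 /tmp/saved-video.flv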
1,350,495,713,000
I have more than 1000 lines in a file. The file starts as follows (line numbers added): Station Name Station Code A N DEV NAGAR ACND ABHAIPUR AHA ABOHAR ABS ABU ROAD ABR I need to convert this to a file, with comma separated entries by joining every two lines. The final data should look like Station Name,Station Code...
Simply use cat (if you like cats ;-)) and paste: cat file.in | paste -d, - - > file.out Explanation: paste reads from a number of files and pastes together the corresponding lines (line 1 from first file with line 1 from second file etc): paste file1 file2 ... Instead of a file name, we can use - (dash). paste takes...
Text processing - join every two lines with commas
1,350,495,713,000
I have the following two files ( I padded the lines with dots so every line in a file is the same width and made file1 all caps to make it more clear). contents of file1: ETIAM...... SED........ MAECENAS... DONEC...... SUSPENDISSE contents of file2 Lorem.... Proin.... Nunc..... Quisque.. Aenean... Nam...... Vivamus...
Assuming you don't have any tab characters in your files, paste file1 file2 | expand -t 13 with the arg to -t suitably chosen to cover the desired max line width in file1. OP has added a more flexible solution: I did this so it works without the magic number 13: paste file1 file2 | expand -t $(( $(wc -L <file1) + 2 ...
A better paste command
1,350,495,713,000
I have ±10,000 files (res.1 - res.10000) all consisting of one column, and an equal number of rows. What I want is, in essence, simple; merge all files column-wise in a new file final.res. I have tried using: paste res.* However (although this seems to work for a small subset of result files, this gives the following ...
If you have root permissions on that machine you can temporarily increase the "maximum number of open file descriptors" limit: ulimit -Hn 10240 # The hard limit ulimit -Sn 10240 # The soft limit And then paste res.* >final.res After that you can set it back to the original values. A second solution, if you cannot c...
Combining large amount of files
1,350,495,713,000
How do I join two files vertically without any separator? I tried to use paste -d"" a b, but this just gives me a. Sample file: 000 0 0 0 0001000200030004 10 20 30 40 2000 4000 .123 12.1 1234234534564567
paste uses '\0' to denote an empty delimiter, as defined by POSIX: paste -d'\0' file1 file2 Using -d"" a b is the same as -d a b: the paste program sees three arguments -d, a and b, which makes a the delimiter and b the name of the sole file to paste. If you're on a GNU system (non-embedded Linux, Cygwin, …), you can use: paste -d...
paste files without delimiter
1,350,495,713,000
In Linux, I have the following problem with paste from (GNU coreutils) 8.13: Trying to set another delimiter than the default (TAB) results in either just printing the first character of the defined delimiter or perfectly ignoring it. Question: How does one define (multiple) delimiters when using paste? Simply using, ...
To have abc inbetween file1 and file2, you can do: paste -d abc file1 /dev/null /dev/null file2 Or: paste -d abc file1 - - file2 < /dev/null If you want two tabs: paste file1 /dev/null file2
paste command: setting (multiple) delimiters
1,350,495,713,000
I'm trying to figure out a way to copy the current text in a command line to the clipboard WITHOUT touching the mouse. In other words, I need to select the text with the keyboard only. I found a half-way solution that may lead to the full solution: Ctrl+a - move to the beginning of the line. Ctrl+k - cuts the entire l...
If using xterm or a derivative you can setup key bindings to start and end a text selection, and save it as the X11 primary selection or a cutbuffer. See man xterm. For example, add to your ~/.Xdefaults: XTerm*VT100.Translations: #override\n\ <Key>KP_1: select-cursor-start() \ select-cursor-end(PRIMARY...
How to copy text from command line to clipboard without using the mouse?
1,350,495,713,000
File1: .tid.setnr := 1123 .tid.setnr := 3345 .tid.setnr := 5431 .tid.setnr := 89323 File2: .tid.info := 12 .tid.info := 3 .tid.info := 44 .tid.info := 60 Output file: .tid.info := 12 .tid.setnr := 1123 .tid.info := 3 .tid.setnr := 3345 .tid.info := 44 .tid.setnr := 5431 .tid.info := 60 .tid.setnr := 89323
Using paste: paste -d \\n file2 file1
Merge alternate lines from two files
1,350,495,713,000
I often do operations like paste <(cut -d, -f1 file1.csv) <(cut -d, -f1 file2.csv) which is very tedious with more than a few files. Can I automate this process, e.g. with globbing? I can save the cut results with typeset -A cut_results for f in file*.csv; do cut_results[$f]="$(cut -d, -f1 $f)" done but I'm not ...
You can automate this with globbing, specifically the e glob qualifier, plus eval, but it isn't pretty and the quoting is tricky: eval paste *.csv(e\''REPLY="<(cut -d, -f1 $REPLY)"'\') The part between \'…\' is some code to execute for every match of the glob. It is executed with the variable REPLY set to the match,...
How can I apply `cut` to several files and then `paste` the results?
1,350,495,713,000
Now I have two files: aaaa.txt: a=0; b=1; c=2; bbbb.txt: d=3 e=4 f=5 I want to merge aaaa.txt and bbbb.txt into cccc.txt. cccc.txt should look as follows: a=0;d=3 b=1;e=4 c=2;f=5 So, how can I do this?
You can use paste for this: paste -d '\0' aaaa.txt bbbb.txt > cccc.txt From your question, it appears that the first file contains ; at the end. If it didn't, you could use that as the delimiter by using -d ';' instead. Note that contrary to what one may think, with -d '\0', it's not pasting with a NUL character as t...
How to merge two files in corresponding row?
1,350,495,713,000
I have 20 tab-delimited files with the same number of rows. I want to select the 4th column of each file, pasted together into a new file. In the end, the new file will have 20 columns, with each column coming from the 20 different files. How can I do this with Unix/Linux command(s)? Input: 20 files of this same format. I want th...
with paste under bash you can do: paste <(cut -f 4 1.txt) <(cut -f 4 2.txt) .... <(cut -f 4 20.txt) With a python script and any number of files (python scriptname.py column_nr file1 file2 ... filen): #! /usr/bin/env python # invoke with column nr to extract as first parameter followed by # filenames. The files sho...
Select certain column of each file, paste to a new file
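The twenty process substitutions can be generated rather than typed out; a bash sketch, assuming the files are named 1.txt through 20.txt and column 4 is wanted, as in the question:
# build "paste <(cut -f 4 1.txt) ... <(cut -f 4 20.txt)" and run it
cmd=paste
for i in $(seq 1 20); do
    cmd="$cmd <(cut -f 4 $i.txt)"
done
eval "$cmd" > new_file.txt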
1,350,495,713,000
Let's assume that I've got two text files a and b. $cat a a a a a a a $cat b b b b b b b Then, I want to merge these two files vertically by using paste. The merged file is shown below a a a a a a b b b b b b NOT $paste a b > AB $cat AB a a b b a a b b a a b b
Just cat a.txt b.txt > out.txt. If you want even spaces and no blank lines $ awk 'NF' inputA.txt inputB.txt a a a a a a b b b b b b
How to merge text files vertically? [duplicate]
1,350,495,713,000
I want to output two text files in two columns — one on the left side and other one on the right. paste doesn't solve the problem, because it only insert a character as delimiter, so if the first file has lines of different lengths output will be twisted: $ cat file1 looooooooong line line $ cat file2 hello world $ pa...
What about paste file{1,2} | column -s $'\t' -tn? looooooooong line line hello line world This tells column to use Tab as the column separator; we take it from the paste command, where it is the default separator if none is specified; generally: paste -d'X' file{1,2} | column -s $'X' -tn whe...
Print two files in two columns side-by-side
1,350,495,713,000
art_file (cat -A output): .::""-, .::""-.$ /:: \ /:: \$ |:: | _..--""""--.._ |:: |$ '\:.__ / .' '. \:.__ /$ ||____|.' _..---"````'---. '.||____|$ ||:. |_.' `'.||:. |$ ||:.-'` .-----. ';:. |$ ||/ .' ...
The trouble is each line has a different length. The easiest solution is to give a large enough width to pr: pr -mtw 150 art_file caption_file If you want the caption text to get closer, I suggest awk ' l<length && NR<=n{l=length} NR!=FNR{ printf "%-"l"s", $0 getline line < "caption" print line } ' ...
What is the correct way to merge two ASCII art files side by side while preserving alignment?
1,350,495,713,000
What is the fastest command line way to merge the different lines of files? For example, I have two files: a.txt: foo bar foobar b.txt foo foobar line by bar And I would like to get the following output: foo bar foobar line by Is there any fast way to merge files like the example above? (The order of the lines ...
Use the awk !seen idiom if you don't want the output sorted: $ awk '!seen[$0]++' a.txt b.txt foo bar foobar line by
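For comparison, sort -u a.txt b.txt would remove the same duplicates but reorder everything alphabetically; the !seen idiom keeps the first occurrence of each line in the original order.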
How to merge the different lines of files?
1,350,495,713,000
I have a paste command like this paste -d , file1.csv file2.csv file3.csv And file2.csv contains numbers like this 0.2 0.3339 0.111111 I want the values in file2.csv to have 3 decimals, like this: 0.200 0.334 0.111 For one value this works: printf "%.3f" "0.3339" -> 0.334 But for multiple values in file2.csv this...
You're close; you just need to tell printf to zero-pad to the right of the decimal point: $ cat 736678.txt 0.2 0.3339 0.111111 $ for value in $( cat 736678.txt ); do printf "%.3f\n" "$value"; done 0.200 0.334 0.111 The format string %.3f means "a floating-point number with precisely three decimal places to the right ...
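Because an unquoted $( cat ... ) relies on word splitting, an awk one-liner is a more robust way to round the whole column before pasting (a sketch using a hypothetical intermediate file):

awk '{printf "%.3f\n", $0}' file2.csv > file2_rounded.csv
paste -d , file1.csv file2_rounded.csv file3.csv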
Rounding many values in a csv to 3 decimals (printf ?)
1,350,495,713,000
I want to merge two files as in How to merge two files based on the matching of two columns? but one file may be missing some of the keys. So for example file1 1 dog 2 cat 3 fox 4 cow file2 1 woof 2 meow 4 mooh wanted output 1 dog woof 2 cat meow 3 fox 4 cow mooh
With GNU awk for arrays of arrays: $ awk '{a[$1][(NR>FNR)]=$2} END{for (i in a) print i, a[i][0], a[i][1]}' file{1,2} 1 dog woof 2 cat meow 3 fox 4 cow mooh or with any awk: $ awk '{keys[$1]; a[$1,(NR>FNR)]=$2} END{for (i in keys) print i, a[i,0], a[i,1]}' file{1,2} 1 dog woof 2 cat meow 3 fox 4 cow mooh Although th...
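If both files happen to be sorted on the key, join offers a shorter route; -a 1 -a 2 keeps the unmatched lines from either file (a sketch, no zero-filling):

join -a 1 -a 2 file1 file2

For the sample data this prints exactly the wanted output, including the bare 3 fox line.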
Merging files based on potentially incomplete keys
1,350,495,713,000
Here is a weak attempt at a paste command trying to include a newline: paste -d -s tmp1 tmp2 \n tmp3 \n tmp4 tmp5 tmp6 > tmp7 Basically I have several lines in each tmp file and I want the output to read First(tmp1) Last(tmp2) Address(tmp3) City(tmp4) State(tmp5) Zip(tmp6) Am I way off base with using a newline in ...
Try this solution with two extra temporary files: paste tmp1 tmp2 > tmp12 paste tmp4 tmp5 tmp6 > tmp456 paste -d "\n" tmp12 tmp3 tmp456 > tmp7 This solution was based on the assumption that the -d option selects the delimiter globally for all input files, so it can either be a blank or a newline. In a way this is true sin...
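For reference, the delimiter list is cycled between consecutive output columns, so a single paste can also produce the three-line block (a sketch, assuming the layout First Last / Address / City State Zip): the list space, newline, newline, space, space gives

paste -d ' \n\n  ' tmp1 tmp2 tmp3 tmp4 tmp5 tmp6 > tmp7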
Trying to add a newline to the paste command
1,350,495,713,000
I have a huge number of files with the following naming style: WBM_MIROC_rcp8p5_mississippi.txt WBM_GFDL_rcp8p5_nosoc_mississippi.txt DBH_HADGEM_rcp4p5_co2_mississippi.txt HMH_IPSL_rcp4p5_mississippi.txt Those files represent tables (some of them have a tab delimiter and others a single-space delimiter) as follows: Y...
The most likely answer is that your data file columns are not separated by tabs but by spaces, for example. You can verify this by running one of them through cat -vet, which shows real tabs as ^I. To change your cut command to use space as a delimiter you need to add the arg -d' ', but since you are already inside sin...
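One way to hedge against the mixed delimiters is to squeeze runs of blanks into single tabs before cutting, or to use awk, which splits on any run of blanks by default (sketches on a hypothetical somefile.txt):

tr -s ' \t' '\t' < somefile.txt | cut -f 2
awk '{print $2}' somefile.txt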
Build table - Add column depending on filenames
1,350,495,713,000
I would like to merge specific columns from two txt files containing a varying number of rows, but the same number of columns (as shown below): file1: xyz desc1 12 uvw desc2 55 pqr desc3 12 file2: xyz desc1 56 uvw desc2 88 Preferred output: xyz desc1 12 56 uvw desc2 55 88...
If column-order is important, i.e. numbers from the same file should be kept in the same column, you need to add padding while reading the different files. Here is one way that works with GNU awk: merge.awk # Set k to be a shorthand for the key { k = $1 SUBSEP $2 } # First element with this key, add zeros to align it...
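A self-contained sketch of the same idea with any awk, printing 0 wherever a key is missing on one side (column 3 from file1, column 4 from file2):

awk 'NR==FNR {a[$1,$2]=$3; next}
{k=$1 SUBSEP $2; print $1, $2, $3, ((k in a) ? a[k] : 0); delete a[k]}
END {for (k in a) {split(k, p, SUBSEP); print p[1], p[2], 0, a[k]}}' file2 file1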
UNIX paste columns and insert zeros for all missing values
1,350,495,713,000
I have a text file in the below format: $data This is the experimental data good data This is good file datafile 1 4324 3673 6.2e+11 7687 67576 2 3565 8768 8760 5780 8778 "This is line '2'" 3 7656 8793 -3e+11 7099 79909 4 8768 8965 8769 9879 0970 5 5878 9879 7.970e-1 9070 0709799 . . . 100000 3655 6868 97...
Since you haven’t asked for a 100% awk solution, I’ll offer a hybrid that (a) may, arguably, be easier to understand, and (b) doesn’t stress awk’s memory limits: awk ' $1 == 2 { secondpart = 1 } { if (!secondpart) { print > "top" } else { print $1, $2 > "left" ...
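The script above is cut off, but the general pattern is to recombine the pieces once the middle columns have been replaced, for example (a guess at the remaining steps, with a hypothetical new_cols file):

paste -d ' ' left new_cols > bottom
cat top bottom > result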
Replace data at specific positions in txt file using data from another file
1,350,495,713,000
I want to take a file that has a list of words, each on a line of its own, e.g. act bat cat dog eel and put them onto one line, comma-separated and quoted, so that it turns out like: 'act', 'bat', 'cat', 'dog', 'eel', i.e. a single quote, then the word, then another single quote, then a comma, then a space. Then...
Using awk: awk '{printf ("'\'%s\'', ", $0)}' infile > new_file 'act', 'bat', 'cat', 'dog', 'eel', Or, to avoid adding an extra comma at the end, use the following instead: awk '{printf S"'\''"$0"'\''";S=", "}' 'act', 'bat', 'cat', 'dog', 'eel' Or using paste, without quoting: paste -d, -s infile act,bat,cat,dog,eel Then qu...
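A sed-plus-paste sketch that also keeps the trailing comma shown in the question:

sed "s/.*/'&',/" infile | paste -sd ' ' -

Each line is wrapped in single quotes with a comma appended, then paste -s joins all lines with a single space, yielding 'act', 'bat', 'cat', 'dog', 'eel',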
Combine list of words into one line, then add characters
1,350,495,713,000
I have two files that I am trying to merge, one file is: linux$ cat temp2 linear_0_A_B linear_0_B_A 103027.244444 102714.177778 103464.311111 102876.266667 103687.422222 103072.711111 103533.244444 102967.733333 103545.066667 102916.933333 103621.555556 103027.511111 104255.776536 103006.256983 103712.178771 102877.13...
You wrote in your last block, linux$ paste temp2 temp > temp2 You cannot do this. (Well, you can, but it won't work.) What happens here is that the shell truncates temp2 ready to receive output from the paste command. The paste temp2 temp command then runs, but by this stage temp2 is already zero length. What you can ...
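Two safe patterns for this kind of in-place update (the second needs sponge from moreutils):

paste temp2 temp > temp2.new && mv temp2.new temp2
paste temp2 temp | sponge temp2

Both make sure temp2 is only replaced after paste has finished reading it.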
Problem with paste and standard output in linux
1,350,495,713,000
So I have: $ cat fruits 2 bananas 3 cherries 4 figs 5 dates 6 elderberries 7 apples 8 grapes and 1 $ cat prices 2 2.18 3 4.11 4 1.69 5 4.52 6 1.73 7 1.01 8 1.09 Every line from 'fruits' corresponds with the same line from 'prices'. How can I sort the fruits in alphabetical order using cut and paste, so that the 'pric...
$ paste prices fruits | sort -k2 | cut -f1 1.01 2.18 4.11 4.52 1.73 1.69 1.09 paste combines the two files, line by line. sort -k2 sorts them on the second column (the fruit name). cut -f1 returns just the first column (the prices). For the above, I assumed that the line numbers shown in the display of the fruits a...
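Note that -k2 means "from field 2 to the end of the line"; restricting the key with -k2,2 sorts on the fruit name alone, which is safer if more columns are ever added:

paste prices fruits | sort -k2,2 | cut -f1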
Cut and Paste Commands
1,350,495,713,000
I can create a file with multiple columns from a single-column input via paste like this: some_command | paste - - This works when some_command produces the data in column-major format. In other words, the input 1 2 3 4 results in the output 1 2 3 4 However, I want the opposite, i.e. I want 1 3 2 4 Backgro...
If I correctly understand the question, you could try with pr: cut -f 5 "${files[@]}" | pr -5 -s' ' -t -l 40 where -5 is the number of columns, -s' ' is the separator (space) and -l 40 is the page length (40 lines). Without coreutils, one could use split to create pieces of N lines: split -lN infile or some_command ...
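Applied to the four-line example from the question, the pr idea looks like this (seq 4 just generates the numbers 1 to 4):

seq 4 | pr -2 -s' ' -t -l 2

which prints 1 3 on the first row and 2 4 on the second, i.e. the column-major layout that paste - - cannot produce.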
Use paste with row-major input
1,350,495,713,000
paste is a brilliant tool, but it is dead slow: I get around 50 MB/s on my server when running: paste -d, file1 file2 ... file10000 | pv >/dev/null paste is using 100% CPU according to top, so it is not limited by, say, a slow disk. Looking at the source code it is probably because it uses getc: while (chr ...
Did some further tests with different alternatives and scenarios (Edit: cutoff values for compiled version supplemented). tl;dr: yes, coreutils paste is far slower than cat; there seems to be no easily available alternative that is uniformly faster than coreutils paste, in particular not for lots of short lines; paste is amaz...
Fast version of paste
1,350,495,713,000
The original output file contained this block of text among much more information: Projecting out rotations and translations Force Constants (Second Derivatives of the Energy) in [a.u.] GX1 GY1 GZ1 GX2 GY2 GX1 0.6941232 ...
Managed to solve my own problem by simply assigning the specific line and column to variables and concatenating them using echo; simple when you know the answer! #!/bin/bash cd FREQ/HF rm Hessian.log for i in *.out; do grep -H -A16 "Force Constants (Second Derivatives of the Energy)" "$i" | tail -n +1 >> Hessian.tm...
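The line-and-column extraction described above can be sketched like this (hypothetical line and column numbers, purely for illustration):

header=$(awk 'NR == 5' "$i")              # grab line 5 verbatim
value=$(awk 'NR == 10 {print $3}' "$i")   # grab column 3 of line 10
echo "$header $value" >> Hessian.log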
How do I append text from one line, to the end of another?