1,294,409,653,000 |
man git-format-patch makes reference of the UNIX mailbox format which is a term I am unfamiliar with.
A google search for "UNIX mailbox format" and similar expressions lists many hits with the term mbox in it.
There is even a man page (man mbox) for mbox.
I am led to conclude that mbox and the UNIX mailbox format are the same thing, but I am not 100% sure.
Can someone confirm (or deny) my assumption?
|
Can someone confirm (or deny) my assumption?
Yes, both are the same.
UNIX mbox format is used by AsyncOS when messages are archived (in anti-spam and anti-virus configuration) and logged (in the message filter log() action).
mbox is traditional UNIX mailbox format. Users' INBOX mboxes are commonly stored in /var/spool/mail or /var/mail directory. Single file contains multiple messages and is the most common format for storing email messages on a hard drive. All the messages for each mailbox are stored as a single, long, text file in a string of concatenated e-mail messages, starting with the “From” header of the message.
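To make the layout concrete, here is a tiny hand-made mbox file (the addresses and dates are invented for the example) and a rough message count based on the "From " separator convention described above:

```shell
# Build a toy two-message mbox; each message starts with a "From " line.
# (In real mbox files, a literal "From " at the start of a body line is
# escaped as ">From " so it is not mistaken for a separator.)
cat > inbox.mbox <<'EOF'
From alice@example.com Sat Jan  1 10:00:00 2011
Subject: first

hello
From bob@example.com Sat Jan  1 10:05:00 2011
Subject: second

world
EOF

# Counting separator lines approximates the number of messages:
grep -c '^From ' inbox.mbox    # prints 2
```

This is also why tools like git format-patch can emit patches in this format: a patch series is just a sequence of mail messages concatenated mbox-style.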
| What is the unix mailbox format? |
I am using GParted to format my USB device (a pendrive), and I unmounted the device using the GUI program itself; it no longer shows as mounted.
When I try to format the device (/dev/sdc1) as NTFS, the ntfs option appears disabled in the GParted (GNOME Partition Editor) GUI. The other options (fat32, ext3, ext4, etc.) are enabled. I think this has something to do with a missing NTFS package/library on my system.
Does anybody know which extra package(s) need to be installed for this on CentOS 7? (It is a Fedora-family package.)
|
OK, I finally found the solution. Although I already had the ntfs-3g package installed, it was not sufficient for formatting the USB drive as NTFS.
One needs to install ntfsprogs, a subpackage of ntfs-3g, to enable the NTFS partition type in GParted.
I installed it using
sudo yum install ntfsprogs
The ntfsprogs package currently consists of a library and utilities
such as mkntfs, ntfscat, ntfsls, ntfsresize, and ntfsundelete (for a
full list of included utilities see man 8 ntfsprogs after
installation).
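A quick way to check whether the relevant utilities actually landed on your system (the tool names here are taken from the package description above):

```shell
# Probe for the NTFS helpers GParted relies on; "missing" means the
# ntfsprogs/ntfs-3g packages still need installing.
for tool in mkntfs ntfsresize ntfscat; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: missing"
    fi
done
```

After installing the package, restart GParted so it re-detects the available filesystem tools.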
| NTFS formatting disabled while using Gparted in CentOS 7 |
I have a memory stick that is not mounted in Ubuntu and doesn't work in Windows.
I can check that it is connected using
Bus 003 Device 011: ID 05e3:0727 Genesys Logic, Inc. microSD Reader/Writer
But I cannot mount it. I want to try formatting it to check whether it is detected again in Windows and Linux.
How can I do that without mounting it?
|
There are many ways to format a USB drive.
Command line
Type this command in the terminal; it will help you identify the USB device name (e.g. sdb, sdc, etc.):
sudo fdisk -l
Make sure the USB is not mounted; if it is, unmount it:
umount /dev/sdX
Replace sdX with your device name
Delete any existing partitions (from the USB device only).
Enter the following on the command line (replacing x with the letter identified in the previous step):
sudo fdisk /dev/sdx
list the existing partitions by typing p.
delete them by issuing the d command (repeat as needed until all partitions have been removed).
Create a new partition.
Type n to create a new partition.
Type p to create a primary partition.
Type 8192 to select the sector. (See note)
Press enter to select the default last sector.
Type t to change the type of the partition.
Type b to select the FAT32 type.
Type w to save (write the changes to disk).
Format the new partition
Now format the new partition (note that the target is the partition, e.g. /dev/sdX1, not the whole device) as FAT32:
sudo mkfs.vfat -F 32 /dev/sdX1
Or as FAT16:
sudo mkfs.vfat -F 16 /dev/sdX1
After this reboot and check if you can see your USB in windows.
Disk Utility
To install Disk Utility if it is not installed:
sudo apt-get install gnome-disk-utility
After opening the app, click on the rectangle icon to unmount the USB
Then click on "Format"
Gparted
NOTE: This method doesn't work for you since you have a problem with detecting your USB, but I have included it for others in the future
To install Gparted if it is not installed:
sudo apt-get install gparted
An advantage of using Gparted is that you can identify the USB name easily. My USB is mounted with the name sdb.
Then you can click on "Format To"
| How to format a USB storage not detected in Ubuntu? |
Is there a way to restrict directory content by the file type?
For example, I have an upload directory that I only want users to put images in, could I go a step further & actually put a restriction that will only allow images in it?
I thought about a cron job to check the file extension, but wondered if there was another way...
|
There are a number of problems with trying to enforce this "after the fact" using a cron job or similar:
Race condition. Regardless of which method you use, if you have some program or some code that will be looking through the directory and may pick up and use files you don't want it to interact with, the only way to actually prevent it from doing that is to either (1) code it up that way (or modify the existing source code if it's open source); or (2) don't allow the files to be placed in that directory in the first place.
You didn't state which OS you're running, but on Linux, you could somewhat efficiently use fanotify to monitor file creation/renaming within the directory, and take action whenever a disallowed file is created or renamed to a disallowed name or contents. Of course, this is inherently a race condition, so if you have other code that will be accessing the directory while "users" (other programs or user accounts) will be placing files in there, it is impossible for you to absolutely prevent those files from being picked up (and possibly read, before your code has a chance to delete/move them).
What I would suggest is this:
Create a new group, or use an existing group, whose only members are users you trust not to run any programs that will place the unwanted files in that directory.
Set the permissions on the directory to something like 770 or 775 and set the group and user owner appropriately, using chmod and chown, respectively. This will prevent users outside of that group from accessing (resp. writing to) that directory, so the program(s)/user(s) which you don't trust to place the correct files in there will be excluded from doing so by the discretionary access control mechanism of the filesystem. This should work on almost any UNIX-alike (even Windows, but the permissions system is slightly different there). Just make sure you aren't storing this directory on ntfs-3g or some other filesystem that ignores discretionary access controls.
Write a program, or use an existing program, that provides a service (a web service, a UNIX domain socket, or something) that will accept file content along with a file name for "upload". This program should then receive the entire file from the user, store it in RAM or a temporary folder until it's fully downloaded, then examine the file name and contents to make sure that files of the undesired file type throw an error, and are not placed in the directory. If the file appears fine, you can write the file contents to the desired file name in the restricted-by-group directory mentioned above.
For the "use an existing program" component of the above paragraph, I googled around a bit and was only able to find one possible solution (and it's not even very robust): Samba supports file extension blacklisting. You should know, however, that any file with any arbitrary contents may be renamed to contain any file extension, to easily bypass file extension checks. File extension checks are useful to prevent users from accidentally activating an executable file (for example; if you have an executable file uploaded as .txt, it will harmlessly open in the user's text editor, with the worst possible consequence of freezing the text editor because the file is too large; whereas, if they upload it with the extension .exe, a double-click on Windows would run the file).
But file extension checks alone can't make sure that the CONTENTS of the file are of a desired type (or not of an undesired type). For that, you'd need some kind of hook to call custom code "upon file upload", e.g. in an FTP server (I'm not aware of any FTP servers that can be extended this way, off the top of my head) -- and then call, e.g., the UNIX utility file on the results to see if it's of an undesired type. file is not bulletproof, but it's very good at recognizing the contents of a file, irrespective of its name.
Last thing I'll leave you with to muse over: The problem of disallowing undesired file contents is much larger if you look beyond the surface. For example, assume you start out with a PDF document. Now, flip one bit in that PDF document so that the file's format now violates the PDF standard. If you open this file in a "naive" PDF reader, it would fail to open due to the file format being violated. However, if you open it in a "smart" PDF reader, it may be able to automatically detect and repair the corruption! Your file type detection program may be fooled into thinking it's not even a PDF document, if the corruption is severe enough. But an end-user might still be able to open the file.
Worse still, if you are trying to suppress specific file contents or file types from being transmitted, there are unlimited numbers of ways for users to bypass this. One approach would be to deliberately corrupt the file header beyond recognition, so that your blacklist doesn't understand the file type, thus allowing it through by default; or, if you have a whitelist, to disguise the file as a valid file format of an allowed type, but then have the contents contain the actual payload. A cooperating pair of users (or an attacker with remote control over another user) could upload and then download this file, change the contents on the receiving end to the desired format, and use the data.
This gets into the field of steganography, where you use extinct stegosaurus DNA to try and determine whether a file is of a given type ;-) (just kidding; stegosauri have nothing to do with steganography :)). In steganography, a file could appear perfectly legitimate on the surface, and even be allowed by a whitelist filter; but the attacker could cooperate with another user to communicate arbitrary data (in any file format) by hiding it within the seemingly-valid data of an existing file. Steganography can be incredibly hard to detect.
However, if your intent is just to block "happy path" file types, where the file openly and blatantly declares itself of a particular type, you can use something like file on files in a staging directory that you allow uploads to over FTP, and then, if the file "checks out" based on your test, you can move it over to the restricted directory. This will work perfectly for preventing users, who are not trained in cryptography/steganography, from uploading undesired file types to your system.
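Here is a minimal sketch of that staging-directory idea using file; the directory names and the image/* whitelist are made up for the example:

```shell
# Vet uploads by content type, not extension, before admitting them.
mkdir -p staging allowed quarantine
printf 'not really an image\n' > staging/fake.png   # the extension lies

for f in staging/*; do
    case $(file -b --mime-type "$f") in
        image/*) mv "$f" allowed/ ;;       # contents really are an image
        *)       mv "$f" quarantine/ ;;    # wrong contents, whatever the name
    esac
done

ls quarantine/    # fake.png lands here: its contents are text/plain
```

Run something like this from cron against the staging area, or from the upload hook itself; either way, only content-checked files ever reach the restricted directory.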
| How could I restrict directory content by file type? |
Given a .jpg picture without associated GPS coordinates, how would you suggest to add custom coordinates to it?
|
You can use the exiftool command
e.g.:
exiftool -exif:gpslatitude="Put_the_GPS_coordinate_here" -exif:gpslatituderef=S your.jpg
Verify it:
exiftool -filename -gpslatitude -gpslongitude -T your.jpg
| How to add a custom GPS location to a picture? |
Sometimes it seems that the standard file command (5.04 on my Ubuntu system) is not sophisticated enough (or I am just using it wrong, which could well be).
For example when I run it on an .exe file, and I am quite positive that it contains some archive, I would expect output like this:
$ improved-file foo.exe
foo.exe: PE32 executable for MS Windows (GUI) Intel 80386 32-bit
.zip archive included (just use unzip to extract)
Other issues:
It doesn't detect concatenations of different formats
It doesn't detect common file formats, e.g. .epub, which is just a .zip container with some standardized .xml files etc. inside (file displays 'data')
An example of such a .exe file containing an archive - I guessed some archive-formats and tried the corresponding unpack-commands with a trial'n'error approach - which worked in the end - but I would rather prefer a more auto-inspection oriented workflow.
|
I can't think of an all-in-one tool, but there are programs that can cope with a large array of files of a given category.
For example, p7zip recognizes a large number of archive formats, so if you suspect that a file is an archive, try running 7z l on it.
$ 7z l ta12b563enu.exe
…
Type = Cab
Method = MSZip
…
If you suspect that a file is an image, try ImageMagick.
$ identify keyboard.jpg.gz
keyboard.jpg.gz=>/tmp/magick-XXV8aR5R JPEG 639x426 639x426+0+0 8-bit DirectClass 37.5KB 0.000u 0:00.000
For audio or video files, try mplayer -identify -frames 0.
If you find a file that file can't identify, you might make a feature request to the author of your magic library.
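Two file flags are worth knowing here, since they partially address the container and concatenation issues raised in the question (these are GNU file options; check your man page):

```shell
# -z looks inside compressed files; -k keeps going and reports every
# match instead of stopping at the first one.
printf 'hello world\n' > note.txt
gzip -f note.txt

file note.txt.gz       # reports only: gzip compressed data ...
file -z note.txt.gz    # additionally reports the ASCII text inside
```

They don't give the full "auto-inspection" workflow asked for, but combined with 7z l and identify they cover a good share of the nested-format cases.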
| More sophisticated file command for deep inspection? |
Let's assume that I want to run a shell script named test.sh at 1 AM every day. I could either use:
0 1 * * * /home/user/test.sh
Or I could use:
0 01 * * * /home/user/test.sh
For the above example, which is technically the correct answer - should a leading 0 be used in the schedule, or should just the number of the hour be entered?
|
If your cron accepts zero-filled numbers, you may use them.
Since the POSIX specification for crontab and the crontab(5) manual on all systems that I have access to only give examples without zero-filled numbers (without actually saying anything about the formatting of numbers), it may be prudent to stay with non-filled numbers if you at some point find yourself on a system where zero-filled numbers are not accepted.
There are examples of systems where 01 is the same as *, not 1:
cron job for hour=7-19 runs every hour instead
| When scheduling jobs to be run by crontab, should leading zeros be used for the hour? |
The description of the bzImage in Wikipedia is really confusing me.
The article includes a diagram, and the text next to it says:
The bzImage file is in a specific
format: It contains concatenated
bootsect.o + setup.o + misc.o +
piggy.o.
I can't find the others (misc.o and piggy.o) in the diagram.
I would also like to get more clarity on these object files.
The info on this post about why we can't boot a vmlinux file is also really confusing me.
Another doubt is regarding the System.map. How is it linked to the bzImage? I know it contains the symbols of vmlinux before creating bzImage. But then at the time of booting, how does bzImage get attached to the System.map?
|
Until Linux 2.6.22, bzImage contained:
bbootsect (bootsect.o)
bsetup (setup.o)
bvmlinux (head.o, misc.o, piggy.o)
Linux 2.6.23 merged bbootsect and bsetup into one file (header.o).
At boot up, the kernel needs to initialize some sequences (see the header file above) which are only necessary to bring the system into a desired, usable state. At runtime, those sequences are not important anymore (so why include them into the running kernel?).
System.map stands in relation with vmlinux, bzImage is just the compressed container, out of which vmlinux gets extracted at boot time (=> bzImage doesn't really care about System.map).
Linux 2.5.39 introduced CONFIG_KALLSYMS. If enabled, the kernel keeps its own map of symbols (/proc/kallsyms).
System.map is primarily used by user space programs like klogd and ksymoops for debugging purposes.
Where to put System.map depends on the user space programs that consult it.
ksymoops tries to get the symbol map either from /proc/ksyms or /usr/src/linux/System.map.
klogd searches in /boot/System.map, /System.map and /usr/src/linux/System.map.
Removing /boot/System.map caused no problems on a Linux system with kernel 2.6.27.19.
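The way tools like ksymoops use System.map can be sketched in a few lines: given an address, find the last symbol whose address is not greater than it. The toy map below is invented; the lookup works with plain string comparison because the addresses are fixed-width lowercase hex, so lexical and numeric order coincide:

```shell
# A fake, sorted System.map fragment: "address type name" per line.
cat > System.map.toy <<'EOF'
c0100000 T startup_32
c0101000 T start_kernel
c0102f00 t rest_init
EOF

# Resolve an address to the nearest preceding symbol, oops-decoder style.
addr=c0101080
awk -v a="$addr" '$1 <= a { sym = $3 } END { print sym }' System.map.toy
# prints: start_kernel
```

The real tools do the same walk, just against the full map (or /proc/kallsyms on kernels with CONFIG_KALLSYMS).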
| More doubts in bzImage |
Recently I came across a command that prints the TOC of a PDF file.
mutool show file.pdf outline
I'd like to use a command for the epub format with similar simplicity
of usage and nice result as the above for pdf format.
Is there something like that?
|
.epub files are .zip files containing XHTML and CSS and some other files (including images, various metadata files, and maybe an XML file called toc.ncx containing the table of contents).
The following script uses unzip -p to extract toc.ncx to stdout, pipe it through the xml2 command, then sed to extract just the text of each chapter heading.
It takes one or more filename arguments on the command line.
#! /bin/sh
# This script needs InfoZIP's unzip program
# and the xml2 tool from http://ofb.net/~egnor/xml2/
# and sed, of course.
for f in "$@" ; do
echo "$f:"
unzip -p "$f" toc.ncx |
xml2 |
sed -n -e 's:^/ncx/navMap/navPoint/navLabel/text=: :p'
echo
done
It outputs the epub's filename followed by a :, then indents each chapter title by two spaces on the following lines. For example:
book.epub:
Chapter One
Chapter Two
Chapter Three
Chapter Four
Chapter Five
book2.epub:
Chapter One
Chapter Two
Chapter Three
Chapter Four
Chapter Five
If an epub file doesn't contain a toc.ncx, you'll see output like this for that particular book:
book3.epub:
caution: filename not matched: toc.ncx
error: Extra content at the end of the document
The first error line is from unzip, the second from xml2. xml2 will also warn about other errors it finds - e.g. an improperly formatted toc.ncx file.
Note that the error messages are on stderr, while the book's filename is still on stdout.
xml2 is available pre-packaged for Debian, Ubuntu and other debian-derivatives, and probably most other Linux distros too.
For simple tasks like this (i.e. where you just want to convert XML into a line-oriented format for use with sed, awk, cut, grep, etc), xml2 is simpler and easier to use than xmlstarlet.
BTW, if you want to print the epub's title as well, change the sed script to:
sed -n -e 's:^/ncx/navMap/navPoint/navLabel/text=: :p
s!^/ncx/docTitle/text=! Title: !p'
or replace it with an awk script:
awk -F= '/(navLabel|docTitle)\/text/ {print $2}'
| Extract TOC of epub file |
How can I get the official IANA Image Media Type (if any) of a binary stream? I'd like to avoid trusting to file extensions and vague guesses when handling images. Preferably some command using common tools like ImageMagick's identify, or some programming language if necessary.
|
You could use the file command. It's available on most linux distributions by default, and you can get it for Windows via the GnuWin32 file package.
Call it with:
$ file --mime-type clock.png
clock.png: image/png
Note that it's not 100% accurate - I don't think anything can be, theoretically.
If you want to do that in code, there's libmagic that provides a C api. It can process either files or in-memory buffers. (file uses that on Linux.)
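Since the question is about a binary stream: file can read the stream itself from stdin when given - as the filename, so no temporary file is needed:

```shell
# Classify data arriving on stdin; -b suppresses the filename prefix.
printf 'plain text stream\n' | file -b --mime-type -    # text/plain

# A short prefix of the stream is often enough for the magic check
# (the PNG signature bytes here):
printf '\211PNG\r\n\032\n' | file -b --mime-type -
```

This pairs well with pipelines: feed the first few kilobytes of the stream to file, branch on the reported media type, then process the rest.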
| IANA image format string from binary stream |
I was wondering what are some formats of object files in Linux?
There are two types of object files that I know:
executable, which has ELF format
object files that are generated by gcc after compilation but before linkage.
What is the format of such object files?
Or are they also ELF format but with some different sub-formats than executables?
Is the job of a linker to convert the format of this type of object files into the format of executables?
Are there other types of object files?
|
Core dumps are also object files, of a sort, and usually in ELF format, too. Running this program will probably produce a file named "core":
int
main(int ac, char **av)
{
    char *p = 0;
    *p = 'a';    /* NULL dereference: SIGSEGV, which dumps core */
    return 0;
}
My file command says:
core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), SVR4-style, from './dump'
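The question's hunch is right: relocatable objects, executables, shared objects, and core files all share the ELF container and differ in the e_type field at offset 16 of the header. This can be inspected with od on any ELF file (using /bin/sh as a convenient example, and assuming a little-endian machine, as is common):

```shell
# The 4-byte magic that every ELF file starts with:
head -c 4 /bin/sh | od -An -c      # 177   E   L   F

# e_type, a 16-bit little-endian value after the 16-byte identification:
#   1 = relocatable (.o)   2 = executable   3 = shared object/PIE   4 = core
od -An -tu2 -j16 -N2 /bin/sh
```

A .o from gcc -c would show 1 here; a core dump would show 4. The linker's job is, in part, turning type-1 relocatable inputs into a type-2 (or type-3) output.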
| Different formats of object files in Linux |
I have a bunch of files with a .zip extension that I cannot seem to extract on my HPC:
$ unzip RowlandMetaG_part1.zip
Archive: RowlandMetaG_part1.zip
warning [RowlandMetaG_part1.zip]: 13082642473 extra bytes at beginning or within zipfile
(attempting to process anyway)
error [RowlandMetaG_part1.zip]: start of central directory not found;
zipfile corrupt.
(please check that you have transferred or created the zipfile in the
appropriate BINARY mode and that you have compiled UnZip properly)
The size of the zip file itself is 17377631766 bytes.
However, when I download the file to my mac and double-click, the Archive Utility app is able to unpack the file (it contains a directory with about 200 gzipped files inside).
The place that generated the file says:
The files are simply zipped here on our local lab PC running Windows, then uploaded to Dropbox...most people don’t have any problems with them and many can directly download the links I give them using the Linux wget command directly into their servers, then unzip there (the Linux utility can usually handle PC-zipped files).
I'm not sure that the fact that the files are from dropbox is relevant, but I used curl -LO to download (also tried wget - this doesn't change anything), and the files show up with ?dl=1 at the end of the file name. That said, when I download from dropbox to my mac, unzip still fails with the same error.
My question: is there any way to get this to unzip on the server? Some software that accomplishes the same thing Archive Utility.app does, or some other way of determining which unzipping tool to use?
EDIT: Based on comments: some additional information:
$ file RowlandMetaG_part1.zip
RowlandMetaG_part3.zip: Zip archive data, at least v2.0 to extract
$ zip --version
Copyright (c) 1990-2008 Info-ZIP - Type 'zip "-L"' for software license.
This is Zip 3.0 (July 5th 2008), by Info-ZIP.
Also, I did try tar, but without success.
$ tar -xvf RowlandMetaG_part1.zip
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Archive contains `l@\022\t1\fjp\024uP\020' where numeric off_t value expected
tar: Archive contains `\024\311\032b\234\254\006\031' where numeric mode_t value expected
tar: Archive contains `\312\005hЈ\2138vÃ\032p' where numeric time_t value expected
# etc...
And I end up with crap in the directory like this:
$ ls
???MK??%b???mv?}??????@*??TZ?S?? ??????+??}n>,!???ӟw~?i?(??5?#?ʳ??z0?[?Ed?@?쑱??lT?d???A??T???H??
,??Y??:???'w,??+?ԌU??Wwxm???e~??ZJ]y??ˤ??4?SX?=y$Ʌ{N\?P}x~~?T?3????y?????'
|
It turns out that, because the file is so large, unzip can't handle it (it maxes out at 2 GB). Instead, I can use jar:
$ jar xvf RowlandMetaG_part1.zip
inflated: RowlandMetaG_part1/296E-7-26-17-O_S23_L001_R1_001.fastq.gz
# etc...
| Unix unzip is failing but Mac Archive Utility works |
I have a local static variable, something like this:
void function(void) {
static unsigned char myVariable = 0;
...
I dump the symbol table using readelf as follows:
readelf -s myprogram.elf
and I get the symbol table, that contains myVariable as follows:
...
409: 00412668 1 NOTYPE LOCAL DEFAULT 16 myVariable.9751
...
My question is: what does the number mean after the name of the variable and the dot? And is there any detailed documentation about the output format of readelf? The man page does not contain information about the format of the symbol table, and I cannot find anything about this.
(I'm using Xilinx's ARM GNU tools, but I guess, this is kind of the same for other platforms as well)
Thanks!
|
That's not an artifact of readelf's output; myVariable.9751 is really that symbol's name. In order to distinguish static variables defined in different scopes/functions, the compiler has to decorate their names in some way:
$ cat a.c
static int var;
int foo(void){
static int var;
if(var++ > 3){ static int var; return var++; } else return var++;
}
int bar(void){ static int var; return var++; }
int baz(void){ return var++; }
$ cc -Wall -o - -S a.c | grep local.*var
.local var
.local var.1759
.local var.1760
.local var.1764
Notice that the dot (.) cannot be used in C as part of an identifier, so var.num is not going to collide with any other variable defined by the user.
As to readelf documentation, there isn't much else beyond the man page and reading the source code; but you can also use objdump -tT instead of readelf -s; maybe you'll find its man page better.
| What is the number in readelf symbol table name? |
I have a .CSV file which upon passing the file test_file.csv command gives the output as:
test_file.csv: ISO-8859 English text, with CR line terminators
When I use cat, head or tail on the file, the entire file content is printed at once, because with CR terminators the tools see the whole file as a single line. How do I convert the line terminators so that I can use these commands and process the file further? Also, I was wondering if there is a way to know how this file was generated/created. Please suggest.
|
The only thing I'm aware of that commonly used a bare CR as a line terminator is old Mac systems (before Mac OS X), but unless it's a really old file, that seems unlikely.
In any case, the mac2unix program in the dos2unix package should be able to fix it for you.
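If dos2unix/mac2unix isn't available, plain tr can perform the same CR-to-LF rewrite:

```shell
# Simulate an old-Mac-style file: lines end in a bare CR (\r).
printf 'col1,col2\rval1,val2\r' > legacy.csv

# Rewrite every CR as LF so line-oriented tools see real lines again.
tr '\r' '\n' < legacy.csv > fixed.csv

head -n 1 fixed.csv    # prints: col1,col2
```

After the conversion, head, tail, and cat behave normally, and file should report LF line terminators.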
| Unable to do head or tail for a file |
How can I convert (or open) an .ost file (Microsoft Outlook email folder) on Linux?
Ideally, I would like to read it with Mutt, but mutt does not seem to understand this format. Therefore I would like to convert it into something readable such as mbox or maildir format.
Are there such conversion tools on Linux?
|
There are a few tools which help with interoperability with Outlook files:
libpff, which includes a pffexport program which can extract data from OST (and other) files;
Evolution has a PST import plugin which can handle OST files;
libpst, which can convert PST files to mbox files, but I don’t know whether it can handle OST files.
pffexport should at least allow you to view the contents of the emails.
| how to open/convert .ost file (Microsoft Outlook email folder) on linux |
I ran into a problem on busybox where my /etc/group file was not properly processed.
bash> tail /etc/group
...
onebutlast::1001:user1,user2
last::1002:user3bash>
The user3 was not in the last group according the the getgrouplist function.
Checking the man page (man group):
The /etc/group file is a text file that defines the groups on the system.
There is one entry per line, with the following format:
group_name:password:GID:user_list
A hint in the right direction. But it says nothing about what a 'line' is expected to be.
Easy enough to fix. But my question is: is there some documentation/specification that specifies that the /etc/group file should have a newline as last character?
|
A "line" is by definition a string of text terminated by a newline.
By extension of this definition, a file is not a "text file" if it does not end with a newline character.
That's what POSIX says. That standard does however not care about the /etc/group file as such (the group database may be stored in any sort of database, for example in a plain text file or an LDAP server, as long as it contains at least the group name, numerical group ID, and a list of users allowed in the group). If the documentation on your system says that this file has to be a text file, then it needs to have a final terminating newline character.
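A concrete demonstration of why the final newline matters in practice: read reports end-of-file on the last, unterminated "line", so a naive while-read parser silently drops that entry (the group names below are made up):

```shell
# Two entries, but the file lacks a trailing newline after the second.
printf 'groupa::1000:user1\ngroupb::1001:user2' > group.test

n=0
while IFS= read -r line; do
    n=$((n+1))            # only runs for fully terminated lines
done < group.test
echo "$n"                 # prints 1 -- groupb was never processed
```

Any parser built this way (and many C parsers with similar line-reading loops) exhibits exactly the symptom described in the question.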
| Is it ok to have no newline at end of /etc/group? |
Modem Manager GUI creates a database named sms.gdbm to store all SMS details. Currently, Modem Manager GUI does not provide a feature to simply delete all received/sent messages, so I'm trying to create a program to remove those records from its database (sms.gdbm). First, though, I want to know the structure of the sms.gdbm database: what databases it contains, and the table and column names. Are there any CLI or GUI programs to display the structure of a *.gdbm file?
|
GDBM databases are readable through the GDBM API. They are basically a way to store simple key-value pairs of any kind. There is no "structure" as in traditional DBMSes : no tables, no columns... Only keys and values.
The API defines the following functions:
GDBM_FILE gdbm_open (const char *name, int block_size, int flags, int mode, void (*fatal_func)(const char *));
void gdbm_close (GDBM_FILE dbf);
typedef struct {
char *dptr;
int dsize;
} datum;
int gdbm_store (GDBM_FILE dbf, datum key, datum content, int flag);
datum gdbm_fetch (GDBM_FILE dbf, datum key);
int gdbm_delete (GDBM_FILE dbf, datum key);
datum gdbm_firstkey (GDBM_FILE dbf);
datum gdbm_nextkey (GDBM_FILE dbf, datum prev);
const char * gdbm_strerror (gdbm_error errno);
Basically, all you have to do is open the file through the API...
GDBM_FILE database = gdbm_open("sms.gdbm", 512, GDBM_READER, 0, NULL);
And start reading:
#include <stdio.h>
#include <stdlib.h>
#include <gdbm.h>
int main(int argc, char** argv)
{
GDBM_FILE database = gdbm_open("sms.gdbm", 512, GDBM_READER, 0, NULL);
datum key, data;
for(key = gdbm_firstkey(database); /* get the first key */
key.dptr != NULL; /* keep going until the end */
key = gdbm_nextkey(database, key)) /* next key */
{
/* fetch data associated to key */
data = gdbm_fetch(database, key);
if(data.dptr != NULL)
printf("Entry found (%d bytes) : %s.\n", data.dsize, data.dptr);
}
gdbm_close(database);
return EXIT_SUCCESS;
}
Be careful that there is no certainty as to the type of data stored in the database. Here, I assumed there would be strings, but it could be anything. The data is stored in binary form, and the only thing you can be certain of is the size (data.dsize). The API will provide you with a pointer to the start of the data (data.dptr), but how you process it is up to you (or at least, up to your Modem Manager GUI).
Once you've found the entry you want to delete, just call gdbm_delete:
gdbm_delete(database, key);
And don't forget to close everything when you're done ;)
gdbm_close(database);
I am not aware of any GDBM reader program already available, but writing one doesn't really require too much effort. Don't forget to include the GDBM header (gdbm.h) and to link the library when compiling:
gcc reader.c -o reader -lgdbm
| How to get the structure of a GDBM database |
I created a username/password combination of the form onetwo:bucklemyshoe, ie the password file contains the single line onetwo|bucklemyshoe
Whenever I try to connect the message appears on the logon page:
You were disconnected for the following reason:
invalid challenge response
The logs display the following message
Authentication required by password file authenticator module
sending challenge for username 'onetwo' using hmac+sha256 digest
Warning: hmac+sha256 challenge for 'onetwo' does not match
Warning: authentication failed
invalid challenge response
Disconnecting client Protocol(ws websocket: 111.111.111.111:14333 <- 222.222.222.222:35555):
invalid challenge response
It makes no difference whether Insecure plain-text passwords is checked or not.
The contents of the password file are not what xpra expects. Is the actual format documented somewhere? Is there a utility or script to create the files in the right format?
|
According to the xpra mailing list where this was also asked, the password file format is documented on the wiki:
Password File
"file" vs "multifile":
"file" contains a single password, the whole file is the password
"multifile" contains a list of authentication values, see proxy server file authentication - this module is deprecated in favour of "sqlite" which is easier to configure.
To make a regular password file, you just write the password in plaintext:
echo -n "bucklemyshoe" > yourpasswordfile.txt
| What is the correct format for xpra password files? |
On Linux, each software can decide the configuration format he wishes to use. Some uses TOML, INI, JSON, XML, CSV, YAML, JS, CSS, scripts, and so on.
However, some configuration files use INI-like text formats which seem non-standard, e.g.:
A text file in which each line is composed of a key and a value separated by one or more whitespace characters; lines beginning with "#" are comments, and sometimes there are some kind of "blocks" (e.g. in SSH):
Include /etc/ssh/ssh_config.d/*.conf
Host *
# ForwardAgent no
SendEnv LANG LC_*
A variant is used (e.g. in Nginx) in which "blocks" are delimited by {}:
server {
listen 127.0.0.1:80;
}
Is there any name, or group of words, used to designate this type/family/kind of format?
|
"Text-based configuration format, with weak structural hierarchy"
is what I'd call the common denominator of both examples. An nginx config file is syntactically as different to an SSH config file as it is to JSON – it might look similar on a cursory glance, but the things both parsers can do with the content of the file are sufficiently different that I wouldn't even want to throw them in the same class of syntaxes.
| Is this type of Linux configuration files format has a name or a way to designate them? |
1,294,409,653,000 |
When I tried the command cat < 1.pdf, it printed a very large output which was totally incomprehensible to me. The content of 1.pdf was abc.
The output was like this:
ÀýÓëöûcÎ=ÉÐÎTaüÍ8]ö¹mg:=Rú*@H1S¢▒ùá½~Ì8u_4,¬7ïyt#¯ÚZ|åôÛ~«Æ fM²JKÁNÿ6 ì©ìÞ¾▒bT
¦åÊmBíöÖ¡÷ÄïÝM{Í1¹@;ÄqÄú t]È7DJ Êûc0£jÜÖã\0O8À±(2)èJR'Ø÷=~ÝÆÂµ¡´ oÇKÈ]¹ÞÜY)ÚwÒ?[4ò©Ió¦>G)î¾J&d}ýíÜÅÓò~Ø0 $´Në¿´Èc®pVqí+ëCppG¾ùóßeõõ6GÌ,öfú8Ô7»S[¢S50cq/_9¹jó¿·Ü%×tQSßî▒LðbkÂÒxâ£Ö▒üVAûÇamÏ·Â׫H´+ÆWíç´upèó`I]± ÎëÚwiòtçúwAhO¼²´'Æ©ëÀ0lô?¿ÌIò▒ìXË<»ÅUepçæå¥
SïÒFҽϷº®Ën.Z×´\£ÁEH@®2ÊçC¢n½¡hÑâ>º´¢YÚXEfg sôë¥*|zº7>ù!I©Åÿ«; ;&==
)dS/),÷È´:ÞõH:CÉÑÀiTÌw!u@Âp2÷AÒfµòÜtFIZ^iÿà£ùÖ5ÐsDiërÿ$0b6Ëü~xÏ·._ÏÒõÜr²`wYù;¤²å»äE3óù²ëvÇ»Ó'ãµ~?ÿîMZÍPkh{aÙ1y&tüÙòÕMoó¬²<ñ/ÇÖa?üʯuÝÓjû,¨Üå@/GMa-èGkD}¤ð©fZbYÑlt/ ±Øj¦èRhCå1âÆñ±S@ÖòÁ~e}
>NÀ^²Jà-Û[Mø¡FËB7ÉVy0|ôÉÏjx[ÙÁnneê)wã+ök'R6"dÞqît¿ý,ߢ]MöV>»Ñ@ÞwM0®èçã^F`çFÕ²æL((¬±S¢ÅïÂy§púÓË5y1pÆ{uxëÈOþ'¾7+Öº!í
uV-R²f*`æ\ías\Øl^÷ ÿ`r1|yÅ-YØ,º·¢▒ÀPæá¸EW0d¤q]&ÿdV6ß.cùÂ~´óðCß▒(¨îMëb#òEnÑ»PÅV½!ÀÈѵ c´è
jFÇé¨J$ǵÀcu?4·[ö&å:1&OÓö(øyKxòëÑq¸çÎÇÈI#5¨çû,'µÐûfG¸Í§³UÚëÎCDøõe²Ñú$Á½é½Ocø»Éßs! ÀõE²©)8½îv¿<Üî|è¶»B▒ÿYw¹·ÌÞÆ¶âôIÇ.>¾H¡n¬Éüׯ*m«¶£L£#7È?¾sÊNoXµ·àMÚ
?ó´ZìâþÌçùä½ÿ$qÀÊcOºùdewænår▒ÖB½dfÕ;t4Êe3#ÄúÀ£çP=¨QÌ▒ÕþºÑ\U¼Fµ»â¯/!NZ=>½éú©,EÉ|ªQafu,5Ý%Xw%seàØÇÇTª BZëCaßî;zÃ"Bma¤ y=ÞwÁű~ÿõåEyV/Ò%q¥Ì^Ç 2U¸âQ³1y(¾&¨òYùÆ«}üx#Á®úÅÿÆðö.i8
ïþ¨è|Âý6\ U+ᬮ[®eVéüvíÜ{ÈL+]¬)ùxþecäæº°ÿoö?,Ä:¯Oò9T:1G4qÞ.ÌtÉÑëEæáHÔ׬¡ª çc^
nÍPÑU7/ÄñcªXâ§nc]¾¨XPayÚGºxª.wÈç¤}¬ÓÏÇ\rf`¤ñ@zJnî´a'¾¨sNÔAëG½PL6ºIQkíJÍçØ¼ÔKýF¾)$\&§^» Eý¨_{tÂp¥ñT`mùPvcìÃç1ÿûKáz¹â®ò÷pר?äIIö 6²¬QªMÚIµÈTã+¤i1âN¾8ɽNww²Îf¹¿kVr²ù½Ä¼Ìå±"ªúº+äÿ¥
óv¡t5!(«:Ö+Ovl<¦aö6Kì»â2óÎ娨|üËàÇÒ.j§·¸[ãæ¿ï`¡÷¥¾©,ÝßiÝPMåoÑéïToãw¿dyçëÀã·ó6ês\ÔR;ÕXÚ»ûÿõå▒öÁ▒¡\Ðs·~=ðÈTDÝCCijÚ`¹ÎÔ¬\·ðñ_ÿü§¯$Âõj®Û¢_]Lù¦8áÌæ²»BJÖÛn¼ûXÏjY8Ò6éØí©YóZtÛt´ÌníUè¨PGØÊzý+ÚT¦M1¥e¬åxendstreamýC~¢6A¬»hå?5µÎÍbKÏÔlwæ l▒_%L;8ê8jßQüg-í× Jâ`d¬*»ö</nä"nAíÀ ÿ]©äXĦMYS▒
endobjÎ{°m-°õ1Hgîºû:h*µVØK°F8ñGÔÎl~V3ÄÞ!bÊcÞDGë¯×Yl(.ãâÝå`£=cü§ýÔb£ÄèMu Íëve«XîÝ£#"VØgáKÔ?öþ§®êϺݡ[3uש²Nµq÷Ú▒ßób¸l6=?'«ì>BÔ?t_Ñ gÁ£õ=q@ÜÕÅûªE3¶L+ÕÅ©Cå}b-7Q,ì·Túlñ¨þ¦:=`î¹aÐçeÆãÜw°¥ès
E▒ªpÇ !}¡1{¹_ZlÈë¡Á;u§·+ú,fo ä-AÏ[HM¥×▒ÌÝåìtò*9¼Â^ѧ▒aÛ`B>/Cö0Þ÷ðiNËþÊ âÄCH´/9fVÎÉó6!vóÑ@ ðÉ!w±y;¯m$i¾äµH+·]YA|åÀD!j{øEÙ^äFÖÑ4▒ääû5þµ)Ãå*y´¹Q« 7í?NýÍ'^õ(*C4f;3ûûn³i|nIï0uo>#n³yµ¹5§*É»&Gtê;c.9 0 objéðÜ}zÔ22T`¦E'ýX®WÈô»&Â>9=ay$àÊGWdwÂ!f·¹eMvÖ=EÞߢ¯ò^¢n`ZÜöQ!Yß§µã gÚEbØù»ÑñÓ 1ªAäØÿPâ'4RÅU]xý'¬¡Â>¹æîtê3Yêy.·¬4ÖçæÍÕOß®×ñh¶ap(<</Type/Font/Subtype/Type1/BaseFont/NimbusRomNo9L-Regu 9îî~ýÚK°ÓÑ*ÈTt÷ ØL
/ToUnicode 8 0 R} Åta°Àj) _ Kû'Üd§éËpôKÜ~¯
/FirstChar 0 /LastChar 255ºP!y%µRÕÖ×bðó°~®_ñA=ùjÒÜW!þy0Æ¢]ìMºõ$ÊÍD96)éàjM[îÍÙù»@y»;«!BÌaÓ;²À ÏÞî¨ZÚ8Ýà ìÏ?å²@ÙÏû¬W$O9²ößÄé«¶Âv(r·?,½ø?u«¬§ýéøZÍñÉÆSêÒfæÿ ÕÀb8ÇxØÝ¯¹ÅAýöµiº\ÉI$▒À}0@bâÚÕq9s'XÝ/Widths[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0®ã¥Vø![
250 333 408 500 500 833 778 333 333 333 500 564 250 333 250 278Õ¶~~Yö*Ó}+«▒rl¥z«° :¬Î>2y®GmÀúÀ
500 500 500 500 500 500 500 500 500 500 278 278 564 564 564 444
921 722 667 667 722 611 556 722 722 333 389 722 611 889 722 722
556 722 667 556 611 722 722 944 722 722 611 333 278 333 469 500e$<Ìßf¼p騸ag#au.ÁÄè6Ý▒
333 444 500 444 500 444 333 500 500 278 278 500 278 778 500 500
500 500 333 389 278 500 500 722 500 500 444 480 200 480 541 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0/NimbusRomNo9L-Regu
0 333 500 500 167 500 500 500 500 180 444 500 333 333 556 556
0 500 500 500 250 0 453 350 333 444 444 500 1000 1000 0 444
0 333 333 333 333 333 333 333 333 0 333 333 0 333 333 333
1000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 889 0 276 0 0 0 0 611 722 889 310 0 0 0 0
0 667 0 0 0 278 0 0 278 500 722 500 0 0 0 0
Why can't `cat` read the content of PDF files?
|
If you call cat on a file containing a text in Chinese¹, it won't print out an English translation. With computer formats, it's the same thing: if you call cat on a file containing data in a certain format, it won't translate it to another format such as plain text. That's not its job: its job is to copy its input to its output without modifying it.
A PDF file isn't a text file. A PDF file can contain text, along with formatting instructions, images, hyperlinks, etc. If you want to read the text in a PDF file, you need to use a tool that understands the PDF file format.
There are a few some recognizable bits in the PDF file: NimbusRomNo9L suggests that the text is written in a Nimbus Roman font. This isn't one of the few fonts that all PDF viewers and printers must have, so it had to be embedded in the PDF file. The text itself (abc) isn't buried in the output because it's compressed.
A common tool to view files regardless of what format they're in is xdg-open. On Debian and derivatives, the see command is an alternative. Both work by guessing the file format from the extension of the file name and calling an appropriate application. If you want to explicitly extract the text parts (and forget about other information such as images, fonts, the location of the text on the page, etc.), you can call a program to convert the PDF file into text, such as pdftotext.
¹ If you understand Chinese, substitute Georgian, or Kannada, or Cree, or whatever language you don't speak.
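A quicker way to see what a mystery file is, without dumping all of it, is to look at its leading "magic" bytes: every PDF starts with %PDF-. (The file below is a fake stand-in, just to show the idea; the file command performs the same kind of check against its magic database.)

```shell
cd "$(mktemp -d)"
printf '%%PDF-1.4\n(binary data would follow here)' > 1.pdf
head -c 5 1.pdf; echo    # prints the PDF magic: %PDF-
```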
| Why 'cat' can't read content of pdf files? |
1,294,409,653,000 |
Please let me know whether the two statements below are correct:
Folder /usr/share/mime/magic has a database/table that will give me what are the current possible file formats (outputs that I can get when I type the file command and follow it by a file name).
Whenever the file command output contains the word "text" it refers to something that you can read with a text viewer, and anything without "text" is some kind of binary.
|
Folder /usr/share/mime/magic has a database/table that will give me what are the current possible file formats (outputs that I can get when I type "file" command and follow it by a file).
Correct except that /usr/share/mime/magic is not the directory that file uses: this file is only used for the MIME type database.
From file's manpage: "The information identifying these files is read from the compiled magic file /usr/share/file/misc/magic.mgc, or the files in the directory /usr/share/file/misc/magic if the compiled file does not exist."
And in fact, in my Arch Linux system, that file belongs to the file package.
Whenever "file" command output contains the word "text" it refers to something that you can read with a text viewer, and anything without "text" is some kind of binary.
Looks correct (I tried to find a counterexample but was unable to).
| File command database and identifying text files |
1,294,409,653,000 |
ELF stands for 'Executable and Linkable Format'. So if I generate shared object (.so) files, are those considered ELF files?
|
Yes, if you generate them on linux for native use. You can see this via file:
> file mylib.so
mylib.so: ELF 64-bit LSB shared object [...]
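You can also check the magic bytes directly: every ELF file begins with the byte 0x7f followed by the letters ELF. Here /bin/sh stands in for a native binary (any ELF executable or .so shows the same header):

```shell
head -c 4 /bin/sh | od -An -c   # the ELF magic: 0x7f 'E' 'L' 'F'
```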
| Are .so files in Fedora considered ELF files? |
1,409,882,382,000 |
I have a directory containing a large number of files. I want to delete all files except for file.txt. How do I do this?
There are too many files to remove the unwanted ones individually and their names are too diverse to use * to remove them all except this one file.
Someone suggested using
rm !(file.txt)
But it doesn't work. It returns:
Badly placed ()'s
My OS is Scientific Linux 6.
Any ideas?
|
POSIXly (without -delete, -mindepth etc.):
find . ! -name 'file.txt' -type f -exec rm -f {} +
will remove all regular files (recursively, including hidden ones) except any files called file.txt.
To remove directories, change -type f to -type d and add -r option to rm; care must be taken to avoid deleting any parent directory of the directory you want to keep (add at least ! -name .).
In bash, to use rm -- !(file.txt), you must enable extglob which enables a subset of ksh's extended glob operators including the !(...) negation one:
$ shopt -s extglob
$ rm -- !(file.txt)
(or calling bash -O extglob)
Also enable the dotglob one to also remove hidden files.
With rm -- !(file.txt), since all the matching files are passed at once to rm, that can cause an Argument list too long error.
find's -exec ... {} + works around that by running rm as many times as necessary.
In ksh93, you can do the same with:
command -x rm -- !(file.txt)
In zsh, you can use ^ to negate pattern with extendedglob enabled:
$ setopt extendedglob
$ rm -- ^file.txt
or using the same syntax with ksh and bash with options ksh_glob and no_bare_glob_qual enabled (or emulate ksh for an even closer emulation of ksh's behaviour).
To work around the too many arguments issue, you can use its zargs autoloadable function.
To remove hidden files, add the D glob qualifier (or enable dotglob like in bash, but that would affect all globs like in bash). To remove only regular files, add the . qualifier. And use **/ for recursive globbing. So an equivalent of the first find command would look like:
autoload zargs
zargs ./**/*(D.oN) -- rm -f
(adding oN to skip sorting for an even closer match).
| Remove all files/directories except for one file |
1,409,882,382,000 |
I want to see what files will be deleted when performing an rm in linux. Most commands seem to have a dry run option to show just such information, but I can't seem to find such an option for rm. Is this even possible?
|
Say you want to run:
rm -- *.txt
You can just run:
echo rm -- *.txt
or even just:
echo *.txt
to see what files rm would delete, because it's the shell expanding the *.txt, not rm.
The only time this won't help you is for rm -r.
If you want to remove files and directories recursively, then you could use find instead of rm -r, e.g.
find . -depth -name "*.txt" -print
then if it does what you want, change the -print to -delete:
find . -depth -name "*.txt" -delete
(-delete implies -depth, we're still adding it as a reminder as recommended by the GNU find manual).
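A throwaway sandbox (hypothetical /tmp path) makes the echo trick concrete:

```shell
mkdir -p /tmp/rm-dryrun && cd /tmp/rm-dryrun
touch a.txt b.txt c.log
echo rm -- *.txt    # prints: rm -- a.txt b.txt  (nothing is deleted)
rm -- *.txt         # the real run
ls                  # only c.log remains
```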
| How do you do a dry run of rm to see what files will be deleted? |
1,409,882,382,000 |
Can I completely disable the "Recently Used" feature in GTK's file/directory selector?
Sometimes programs default to this but since it's not useful in my work-flow and with the way I organize my files, it only adds confusion:
I usually just expect to start from my $HOME, so I get surprised
by the list of folders
Also in case of file saving, I'm annoyed by the fact that you can't just
type in the name and hit Enter--you have to type a path or select one
I'm using Xfce 4.8 on Debian Wheezy and this feature was not available in older Xfce (in Squeeze). I found a post on Xfce-users' mailing list regarding this feature, but without any useful output.
Is it possible to simply turn this off and default to $HOME?
|
@MartinVegter
There is a file ~/.config/gtk-2.0/gtkfilechooser.ini. It should look like Stefano wrote:
[Filechooser settings]
LocationMode=path-bar
ShowHidden=false
ShowSizeColumn=true
GeometryX=377
GeometryY=132
GeometryWidth=612
GeometryHeight=528
SortColumn=name
SortOrder=ascending
StartupMode=recent
There was no DefaultFolder variable in this file, but I found the StartupMode=recent var which I changed to StartupMode=cwd. This only works on GTK 2 applications like mousepad.
I don't know whether there is a settings file for GTK 3 apps like gedit, but it seems that the GTK 3 file chooser already sets the location to the current folder by default.
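If you'd rather not edit the file by hand, a one-line GNU sed does the flip (same path as above; the file is created first here only so the snippet runs anywhere -- back up your real one if unsure):

```shell
ini=${XDG_CONFIG_HOME:-$HOME/.config}/gtk-2.0/gtkfilechooser.ini
mkdir -p "${ini%/*}"
[ -f "$ini" ] || printf '[Filechooser settings]\nStartupMode=recent\n' > "$ini"
sed -i 's/^StartupMode=recent$/StartupMode=cwd/' "$ini"
grep '^StartupMode' "$ini"    # StartupMode=cwd
```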
| Disable "Recently used" in GTK file/directory selector |
1,409,882,382,000 |
I have several utility programs that do not have their own directory and are just a single executable. Typically I put the binary in /usr/local/bin. A problem I have is how to manage preference settings.
One idea is to use environment variables and require the user to define such variables, for example, in their .bashrc. I am a little reluctant, however, to clutter up the .bashrc with miscellaneous preference settings for a minor program.
Is there a Standard (or standard recommendation), that defines some place or method that is appropriate for storing preferences for small utility programs that do not have their own directory?
|
Small utilities for interactive desktop use would be expected to follow the XDG Base Directory Specification and keep their config files under
$XDG_CONFIG_HOME
or (if that is empty or unset) default to
$HOME/.config
The picture is a little less clear for non-GUI tools, since they might run on systems which are headless or which don't otherwise adhere to XDG/freedesktop standards.
However, there's no obvious drawback to using $XDG_CONFIG_HOME if set or $HOME/.config if not, and it should be relatively unsurprising everywhere.
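In practice that boils down to a couple of lines in your tool's startup code; sketched in shell for a hypothetical utility called myutil:

```shell
# honour $XDG_CONFIG_HOME, falling back to ~/.config
config_dir=${XDG_CONFIG_HOME:-$HOME/.config}/myutil
mkdir -p "$config_dir"
# write a default config only if none exists yet
[ -f "$config_dir/config" ] || echo "color=auto" > "$config_dir/config"
cat "$config_dir/config"
```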
| Where should small utility programs store their preferences? |
1,409,882,382,000 |
I have two backup directories (dir1 and dir2) on two different (local) HDDs, and I want to keep them in sync. How can I really sync their contents so that both directories will have the same contents?
|
To sync the contents of dir1 to dir2 on the same system, type:
rsync -av --progress --delete dir1/ dir2
-a, --archive
archive mode
--delete
delete extraneous files from dest dirs
-v, --verbose
Verbose mode (increase verbosity)
--progress
show progress during transfer
— from rsync(1)
Note : The / after dir1 is necessary to mean "the contents of dir1".
Without the trailing slash, rsync would place dir1 itself, including the directory, within dir2. This would create a hierarchy that looks like:
…/dir2/dir1/[files]
| How can I sync two local directories? [duplicate] |
1,409,882,382,000 |
Usually, I unarchive things with $ mkdir newFolder; $ mv *.zip newFolder; $ cd newFolder; $ unzip *.zip, but sometimes I get lazy and just run $ unzip *.zip in an arbitrary folder, so from time to time I end up messing up other content. I will list here some methods -- some archive versions surely have crappy-flags while others are more spartan; I am more interested in the latter.
Some ways to de-unarchive, are there others?
$ find . -anewer fileThatExistedBeforeUnarchieving -ok rm '{}' \; Weaknesses are that it lists the *.zip dirs, so you need to use slow -ok, slow with many *.zip matches and, for some reason, it does not seem to match everything extracted.
If there is only a small number of extracted files, remove them one by one -- slow, cumbersome and error-prone.
When I want to make sure whether the content of the archive is actually a folder, I sometimes check it with $ unzip -l *.bsd; works at least in OpenBSD's unzip version.
If you are referring to certain archiving tools, please state them when appropriate. Keep it simple though -- I am more interested in the WAYS you do it, rather than a single tool.
|
By name
You can generate the list of files in the archive and delete them, though this is annoyingly fiddly with archivers such as unzip or 7z that don't have an option to generate a plain list of file names. Even with tar, this assumes there are no newlines in file names.
tar tf foo.tar | while read -r file; do rm -- "$file"; done
unzip -l foo.zip | awk '
p && /^ --/ {p=2}
p==1 {print substr($0, 29)}
/^ --/ {++p}
' | while …
unzip -l foo.zip | tail -n +4 | head -n -2 | while … # GNU coreutils only
7z l -slt foo.zip | sed -n 's/^Path = //p' | while … # works on tar.*, zip, 7z and more
Instead of removing the files, you could move them to their intended destination.
tar tf foo.tar | while read -r file; do
if [ -d "$file" ]; then continue; fi
mkdir -p "/intended/destination/${file%/*}"
mv -- "$file" "/intended/destination/$file"
done
Using FUSE
Instead of depending on external tools, you can (on most unices) use FUSE to manipulate archives using ordinary filesystem commands.
You can use Fuse-zip to peek into a zip, extract it with cp, list its contents with find, etc.
mkdir /tmp/foo.d
fuse-zip foo.zip /tmp/foo.d
## Remove the files that were extracted mistakenly (GNU/BSD find)
(cd /tmp/foo.d && find . \! -type d -print0) | xargs -0 rm
## Remove the files that were extracted mistakenly (zsh)
rm /tmp/foo.d/**(:"s~/tmp/foo.d/~~"^/)
## Extract the contents where you really want them
cp -Rp /tmp/foo.d /intended/destination
fusermount -u foo.d
rmdir foo.d
AVFS creates a view of your entire directory hierarchy where all archives have an associated directory (same name with # tacked on at the end) that appears to hold the archive content.
mountavfs
## Remove the files that were extracted mistakenly (GNU/BSD find)
(cd ~/.avfs/"$PWD/foo.zip#" && find . \! -type d -print0) | xargs -0 rm
## Remove the files that were extracted mistakenly (zsh)
rm ~/.avfs/$PWD/foo.zip\#/**/*(:"s~$HOME/.avfs/$PWD/foo.zip#~~"^/)
## Extract the contents where you really want them
cp -Rp ~/.avfs/"$PWD/foo.zip#" /intended/destination
umountavfs
By date
Assuming there hasn't been other any activity in the same hierarchy than your extraction, you can tell the extracted files by their recent ctime. If you just created or moved the zip file, you can use it as a cutoff; otherwise use ls -lctr to determine a suitable cutoff time. If you want to make sure not to remove the zips, there's no reason to do any manual approval: find is perfectly capable of excluding them. Here are example commands using zsh or find; note that the -cmin and -cnewer primaries are not in POSIX but exist on Linux (and other systems with GNU find), *BSD and OSX.
find . \! -name '*.zip' -type f -cmin -5 -exec rm {} + # extracted <5 min ago
rm **/*~*.zip(.cm-6) # zsh, extracted ≤5 min ago
find . -type f -cnewer foo.zip -exec rm {} + # created or moved after foo.zip
With GNU find, FreeBSD and OSX, another way to specify the cutoff time is to create a file and use touch to set its mtime to the cutoff time.
touch -d … cutoff
find . -type f -newercm cutoff -delete
Instead of removing the files, you could move them to their intended destination. Here's a way with GNU/*BSD/OSX find, creating directories in the destination as needed.
find . \! -name . -cmin -5 -type f -exec sh -c '
for x; do
mkdir -p "$0/${x%/*}"
mv "$x" "$0/$x"
done
' /intended/destination {} +
Zsh equivalent (almost: this one reproduces the entire directory hierarchy, not just the directories that will contain files):
autoload zmv
mkdir -p ./**/*(/cm-3:s"|.|/intended/destination|")
zmv -Q '(**/)(*)(.cm-3)' /intended/destination/'$1$2'
Warning, I haven't tested most of the commands in this answer. Always review the list of files before removing (run echo first, then rm if it's ok).
| How to de-unzip, de-tar -xvf -- de-unarchive in a messy folder? |
1,409,882,382,000 |
Is there a nice alternative for this? I always use
du -shc *
to check the size of all files and folders in the current directory. But it would be nice to have a colored and nicely formatted view (for example like dfc for viewing the sizes of partitions).
|
This is not coloured, but also really nicely ordered by size and visualized:
ncdu - NCurses Disk Usage
apt-get install ncdu
SYNOPSIS
ncdu [options] dir
DESCRIPTION
ncdu (NCurses Disk Usage) is a curses-based version of the well-known 'du', and provides a fast way to see
what directories are using your disk space.
Output looks like this:
ncdu 1.10 ~ Use the arrow keys to navigate, press ? for help
--- /var/www/freifunk -------------------------------------------------------------
470,7MiB [##########] /firmware
240,8MiB [##### ] /ffki-firmware
157,9MiB [### ] /gluon-alfred-vis
102,6MiB [## ] chaosradio_162.mp3
100,2MiB [## ] /ffki-startseite
99,6MiB [## ] /ffki-startseite-origin
72,3MiB [# ] /startseite
66,2MiB [# ] /metameute-startseite
35,2MiB [ ] /startseite_site
11,9MiB [ ] /jungebuehne
ncdu is nice because you can install it via apt on Debian. Only colors would be cool, as would an export function that does not use the whole screen.
gt5 - a diff-capable 'du-browser'
gt5 looks quite the same, and there are some colors, but they have no meaning (only all files and folders are green). gt5 is also available via apt:
sudo apt-get install gt5
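No colour either, but if ordering is the main thing you're after, plain du piped through GNU sort's human-numeric mode gets close with no extra packages (sandbox data below is just for illustration):

```shell
cd "$(mktemp -d)"
mkdir big small
dd if=/dev/zero of=big/data   bs=1024 count=200 2>/dev/null
dd if=/dev/zero of=small/data bs=1024 count=8   2>/dev/null
du -sh -- * | sort -rh    # largest entries first
```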
| Alternative command for coloured viewing the size of all files and folders |
1,409,882,382,000 |
I have a large music collection stored on my hard drive; and browsing through it, I found that I have a lot of duplicate files in some album directories. Usually the duplicates exist alongside the original in the same directory.
Usually the format is filename.mp3 and the duplicate file is filename 1.mp3. Sometimes there may be more than one duplicate file, and I have no idea if there are duplicate files across folders (for example duplicates of album directories).
Is there any way I can scan for these duplicate files (for example by comparing filesize, or comparing the entire files to check if they are identical), review the results, and then delete the duplicates? The ones that have a longer name, or the ones that have a more recent modified/created date would usually be the targets of deletion.
Is there a program out there that can do this on Linux?
|
There is such a program, and it's called rdfind:
SYNOPSIS
rdfind [ options ] directory1 | file1 [ directory2 | file2 ] ...
DESCRIPTION
rdfind finds duplicate files across and/or within several directories.
It calculates checksum only if necessary. rdfind runs in O(Nlog(N))
time with N being the number of files.
If two (or more) equal files are found, the program decides which of
them is the original and the rest are considered duplicates. This is
done by ranking the files to each other and deciding which has the
highest rank. See section RANKING for details.
It can delete the duplicates, or replace them with symbolic or hard links.
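If rdfind isn't available, a rough manual pass with md5sum can already surface byte-identical files. Here uniq -w32 -D (GNU options) prints every line whose first 32 characters, i.e. the checksum, occur more than once; the sandbox file names are only illustrative:

```shell
cd "$(mktemp -d)"
echo same  > "track.mp3"
echo same  > "track 1.mp3"
echo other > "other.mp3"
# print every line whose checksum (first 32 chars) repeats
md5sum -- *.mp3 | sort | uniq -w32 -D
```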
| Search and Delete duplicate files with different names |
1,409,882,382,000 |
I have daily backups named like this:
yyyymmddhhmm.zip // pattern
201503200100.zip // backup from 20. 3. 2015 1:00
I'm trying to create a script that deletes all backups older than 3 days. The script should also be able to delete all other files in the folder not matching the pattern (but there would be a switch in the script to disable this).
To determine the file age I don't want to use the backups' timestamps, as other programs also manipulate the files and the timestamps can be tampered with.
With the help of: Remove files older than 5 days in UNIX (date in file name, not timestamp)
I got:
#!/bin/bash
DELETE_OTHERS=yes
BACKUPS_PATH=/mnt/\!ARCHIVE/\!backups/
THRESHOLD=$(date -d "3 days ago" +%Y%m%d%H%M)
ls -1 ${BACKUPS_PATH}????????????.zip |
while read A DATE B FILE
do
[[ $DATE -le $THRESHOLD ]] && rm -v $BACKUPS_PATH$FILE
done
if [ $DELETE_OTHERS == "yes" ]; then
rm ${BACKUPS_PATH}*.* # but I don't know how to not-delete the files matching the pattern
fi
But it keeps saying:
rm: missing operand
Where is the problem and how to complete the script?
|
The first problem in your code is that you are parsing ls. This means it will break very easily, if you have any spaces in your file or directory names for example. You should use shell globbing or find instead.
A bigger problem is that you are not reading the data correctly. Your code:
ls -1 | while read A DATE B FILE
will never populate $FILE. The output of ls -1 is just a list of filenames so, unless those file names contain whitespace, only the first of the 4 variables you give to read will be populated.
Here's a working version of your script:
#!/usr/bin/env bash
DELETE_OTHERS=yes
BACKUPS_PATH=/mnt/\!ARCHIVE/\!backups
THRESHOLD=$(date -d "3 days ago" +%Y%m%d%H%M)
## Find all files in $BACKUPS_PATH. The -type f means only files
## and the -maxdepth 1 ensures that any files in subdirectories are
## not included. Combined with -print0 (separate file names with \0),
## IFS= (don't break on whitespace), "-d ''" (records end on '\0') , it can
## deal with all file names.
find "${BACKUPS_PATH}" -maxdepth 1 -type f -print0 | while IFS= read -d '' -r file
do
    ## Does this file name match the pattern (12 digits, then .zip)?
    if [[ "$(basename "$file")" =~ ^[0-9]{12}\.zip$ ]]
then
        ## Delete the file if it's older than the $THRESHOLD
[ "$(basename "$file" .zip)" -le "$THRESHOLD" ] && rm -v -- "$file"
else
## If the file does not match the pattern, delete if
## DELETE_OTHERS is set to "yes"
[ $DELETE_OTHERS == "yes" ] && rm -v -- "$file"
fi
done
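The key comparison can be tried on its own: the 12-digit stamp in the name is itself a number that sorts chronologically, so a plain numeric test against the threshold suffices (GNU date assumed):

```shell
THRESHOLD=$(date -d "3 days ago" +%Y%m%d%H%M)   # GNU date
f=201503200100.zip
stamp=${f%.zip}
case $stamp in
    [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9])
        [ "$stamp" -le "$THRESHOLD" ] && echo "would delete $f" ;;
esac
```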
| How to delete old backups based on a date in file name? |
1,409,882,382,000 |
Is there a way to make git handle a symlink as if it were a file? If I just add a symlink normally with git add symlink, git just stores/controls the path to the file, not the linked file.
Is it possible to make git handle a symlink as if it were the linked file itself?
|
Sounds like you want a hard link
ln sourcefile /some/git/repo/targetfile
This is only any good if the source and target locations are within the same file system. Otherwise you’ll have to settle for a copy or a symlink.
A symlink is a reference to a file.
A hard link is another name for an existing inode.
There are numerous resources on the web that explain in more detail.
I’m not using a special syntax to do with hard links. The man page for ln is a good place to look for the variations on syntax available.
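A quick demonstration in a throwaway directory: both names of a hard link share one inode, so writes through either name are visible through the other:

```shell
cd "$(mktemp -d)"
echo data > sourcefile
ln sourcefile targetfile
ls -i sourcefile targetfile   # both names show the same inode number
echo more >> sourcefile
cat targetfile                # shows both lines: data, then more
```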
| Git - handle symlink as if it was a file |
1,409,882,382,000 |
I have a file named 'sourceZip.zip'
This file ('sourceZip.zip') contains two files:
'textFile.txt'
'binFile.bin'
I also have a file named 'targetZip.zip'
This file ('targetZip.zip') contains one file:
'jpgFile.jpg'
In linux, what bash command shall I use to copy both files ('textFile.txt', 'binFile.bin') from the source archive ('sourceZip.zip') straight into the second archive ('targetZip.zip'), so that at the end of the process, the second archive ('targetZip.zip') will include all three files?
(ideally, this would be done in one command, using 'zip' or 'unzip')
|
Using the usual command-line zip tool, I don't think you can avoid separate extraction and update commands.
source_zip=$PWD/sourceZip.zip
target_zip=$PWD/targetZip.zip
temp_dir=$(mktemp -dt)
( cd "$temp_dir"
unzip "$source_zip"
  zip -gr "$target_zip" .
  # or if you want just the two files: zip -g "$target_zip" textFile.txt binFile.bin
)
rm -rf "$temp_dir"
There are other languages with more convenient zip file manipulation libraries. For example, Perl with Archive::Zip. Error checking omitted.
use Archive::Zip;
my $source_zip = Archive::Zip->new("sourceZip.zip");
my $target_zip = Archive::Zip->new("targetZip.zip");
for my $member ($source_zip->members()) {
# or (map {$source_zip->memberNamed($_)} ("textFile.txt", "binFile.bin"))
$target_zip->addMember($member);
}
$target_zip->overwrite();
Another way is to mount the zip files as directories. Mounting either of the zip files is enough, you can use zip or unzip on the other side. Avfs provides read-only support for many archive formats.
mountavfs
target_zip=$PWD/targetZip.zip
(cd "$HOME/.avfs$PWD/sourceZip.zip#" &&
 zip -gr "$target_zip" .) # or list the files, as above
umountavfs
Fuse-zip provides read-write access to zip archives, so you can copy the files with cp.
source_dir=$(mktemp -dt)
target_dir=$(mktemp -dt)
fuse-zip sourceZip.zip "$source_dir"
fuse-zip targetZip.zip "$target_dir"
cp -Rp "$source_dir/." "$target_dir" # or list the files, as above
fusermount -u "$source_dir"
fusermount -u "$target_dir"
rmdir "$source_dir" "$target_dir"
Warning: I typed these scripts directly in my browser. Use at your own risk.
| Copy a File From One Zip to Another? |
1,409,882,382,000 |
I want to put a script in a cronjob which will run at a particular time, and if the file count is more than 60, it will delete the oldest files from that folder (first in, first out).
I have tried,
#!/bin/ksh
for dir in /home/DABA_BACKUP
do
cd $dir
count_files=`ls -lrt | wc -l`
if [ $count_files -gt 60 ];
then
todelete=$(($count_files-60))
for part in `ls -1rt`
do
if [ $todelete -gt 0 ]
then
rm -rf $part
todelete=$(($todelete-1))
fi
done
fi
done
These are all backup files which are saved daily and named backup_$date.
Is this ok?
|
No, for one thing it will break on filenames containing newlines. It is also more complex than necessary and has all the dangers of parsing ls.
A better version would be (using GNU tools):
#!/bin/ksh
for dir in /home/DABA_BACKUP/*
do
  ## Collect the file names (the sorting by
  ## modification time happens below, via ls)
files=( "$dir"/* );
## Are there more than 60?
extras=$(( ${#files[@]} - 60 ))
if [ "$extras" -gt 0 ]
then
    ## If there are more than 60, remove the oldest
    ## files until only 60 are left. We use ls to sort
    ## by modification date and get the inodes only, and
    ## pass the inodes to GNU find, which deletes them
    find "$dir" -maxdepth 1 \( -inum 0 $(\ls -1iqtr "$dir" | grep -o '^ *[0-9]*' |
head -n "$extras" | sed 's/^/-o -inum /;' ) \) -delete
fi
done
Note that this assumes that all files are on the same filesystem and can give unexpected results (such as deleting wrong files) if they are not. It also won't work well if there are multiple hardlinks pointing to the same inode.
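For names like backup_<date> with no whitespace or newlines (so the caveat above about parsing ls doesn't bite), a shorter GNU-tools sketch is to list oldest-first and drop everything but the last 60:

```shell
cd "$(mktemp -d)"
# create 65 dummy backups with distinct, increasing mtimes
for i in $(seq -w 1 65); do touch -d "@$((1500000000 + 10#$i))" "backup_$i"; done
ls -1tr | head -n -60 | xargs -d '\n' rm --   # removes backup_01 .. backup_05
ls | wc -l                                    # 60 files remain
```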
| How to delete files from a folder which have more than 60 files in unix? |
1,409,882,382,000 |
I have a directory with over 400 images. Most of them are corrupt. I identified the good ones. They are listed in a text file (there're 100+ of them). How can I move them all at once to another directory on BASH?
|
There are several ways to do this that come to mind immediately:
Using a while-loop
Using xargs
Using rsync
Suppose the file names are listed (one per line) in files.txt and we want to move them from the subdirectory source/ to the subdirectory target.
The while-loop could look something like this:
while read filename; do mv source/"$filename" target/; done < files.txt
The xargs command could look something like this:
cat files.txt | xargs -n 1 -d'\n' -I {} mv source/{} target/
And the rsync command could look something like this:
rsync -av --remove-source-files --files-from=files.txt source/ target/
It might be worthwhile to create a sandbox to experiment with and test out each approach, e.g.:
# Create a sandbox directory
mkdir -p /tmp/sandbox
# Create file containing the list of filenames to be moved
for filename in file{001..100}.dat; do basename "$filename"; done >> /tmp/sandbox/files.txt
# Create a source directory (to move files from)
mkdir -p /tmp/sandbox/source
# Populate the source directory (with 100 empty files)
touch /tmp/sandbox/source/file{001..100}.dat
# Create a target directory (to move files to)
mkdir -p /tmp/sandbox/target
# Move the files from the source directory to the target directory
rsync -av --remove-source-files --files-from=/tmp/sandbox/files.txt /tmp/sandbox/source/ /tmp/sandbox/target/
| How to move files specified in a text file to another directory on BASH? [duplicate] |
1,409,882,382,000 |
Suppose that I have a file called temp.txt. Using the cat program, I would like to add the contents of this file to the end of myfile.txt -- creating myfile.txt if it does not exist and appending to it if it does.
I am considering these possibilities:
cat temp.txt > myfile.txt
or
cat temp.txt >> myfile.txt
Both commands appear to work as I want. So, my question is, what is the difference between > and >>? Thanks for your time.
|
> writes to a file, overwriting any existing contents. >> appends to a file.
From man bash:
Redirecting Output
Redirection of output causes the file whose name results from the
expansion of word to be opened for writing on file descriptor n, or
the standard output (file descriptor 1) if n is not specified. If the
file does not exist it is created; if it does exist it is truncated to
zero size.
The general format for redirecting output is:
[n]>word
If the redirection operator is >, and the noclobber option to the set builtin has been enabled, the redirection will fail
if the file whose name results from the expansion of word exists and
is a regular file. If the redirection operator is >|, or the
redirection operator is > and the noclobber option to the set builtin
command is not enabled, the redirection is attempted even if the file
named by word exists.
Appending Redirected Output
Redirection of output in this fashion causes the file whose name
results from the expansion of word to be opened for appending on file
descriptor n, or the standard output (file descriptor 1) if n is not
specified. If the file does not exist it is created.
The general format for appending output is:
[n]>>word
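A quick, throwaway demonstration of the difference in an empty directory:

```sh
# Show that > truncates while >> appends.
cd "$(mktemp -d)"
printf 'first\n'  > demo.txt   # > creates the file (or truncates it)
printf 'second\n' > demo.txt   # > again: the previous contents are gone
printf 'third\n' >> demo.txt   # >> appends to the existing contents
cat demo.txt                   # prints "second" then "third"
```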
| What is the difference between > and >> (especially as it relates to use with the cat program)? [duplicate] |
1,409,882,382,000 |
I would like to concatenate multiple files following a specific order from an other file.
I have multiple files called freq_<something> that I want to concatenate.
The "something" are listed in another file called "list".
So here is my list:
$ cat list
003137F
002980F
002993F
I want to do:
cat freq_003137F freq_002980F freq_002993F > freq_all
But my list contains hundreds of values so I can't really do that!
What is a way to automate it? I thought I could append a file with a while read line but it fails...
Thanks!
M
|
You can do it with a while, why not? This should work:
while read suffix; do cat freq_"${suffix}"; done < list > freq_all
Alternatively, you can generate the command with printf and run it manually:
$ echo "cat $(printf 'freq_%s ' $(cat list)) > freq_out"
cat freq_003137F freq_002980F freq_002993F > freq_out
Or simply execute it directly:
cat $(printf 'freq_%s ' $(cat list)) > freq_out
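For example, a quick sandbox run of the while-loop approach, using hypothetical data:

```sh
# Build a small sandbox: a "list" file plus matching freq_* files.
cd "$(mktemp -d)"
printf '003137F\n002980F\n002993F\n' > list
for s in 003137F 002980F 002993F; do echo "data_$s" > "freq_$s"; done

# Concatenate in the order given by "list".
while read suffix; do cat freq_"${suffix}"; done < list > freq_all
cat freq_all
```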
| Concatenate files using a specific order based on another file |
1,409,882,382,000 |
On my Gentoo box running without a desktop environment, every time I try to open a file the system tries to open it with Firefox. I understand that without a desktop environment "open file" cannot function correctly, but why is everything opened with Firefox? If I want to, can I change it to something else?
|
Usually this information is handled in 2 places:
Application .desktop files, which advertise what file types a program can open
The MIME database, which specifies what application should be used to open files with a specific MIME type
The MIME database can be customized by editing .local/share/applications/mimeapps.list and .local/share/applications/mimeinfo.cache. The tool for editing it is xdg-mime from x11-misc/xdg-utils. Be aware that the man page states it can only be used from inside a desktop session.
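For example, a default application can be set by adding an entry to ~/.local/share/applications/mimeapps.list; the .desktop names below are illustrative and must match files actually installed on your system:

```ini
[Default Applications]
application/pdf=org.pwmt.zathura.desktop
text/plain=vim.desktop
```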
Edit Please note that some applications, like Firefox, do not respect the MIME database settings.
| Gentoo default application for opening files |
1,409,882,382,000 |
I have a set of files with a prefix, say "pre_", on a Linux machine, and I just want to rename all of these files by removing that prefix. Here is the Perl code I wrote; it doesn't throw any errors, but the work is not done.
#!/usr/bin/perl -w
my @files = `ls -1 | grep -i \"pre_.*\"`;
foreach $file ( @files )
{
my @names = split(/pre_/, $file);
my $var1 = $names[1];
'mv "$file" "$var1"';
}
|
You aren't executing anything. When you put something in single quotes (''), that's just a string, so 'mv "$file" "$var1"' is not a command, it's a string and doesn't do anything. A string by itself is just a true statement. In order to execute the command, you need to use:
system "mv", "--", $file, $var1;
(Not system("mv '$file' '$var1'") which would introduce a command injection vulnerability).
But there's no need for that, perl has a rename() function, so you can do:
rename($file, $var1)
Next, you really shouldn't parse the output of ls, that is a very bad idea and fragile. It is also completely unnecessary in Perl (or any other programming language). If you want to match all files in the current directory containing the string pre_ (which is what your grep does—you don't need the grep, by the way, you could have just used ls -d -- *pre_*), you can use glob() like this:
my @files = glob("*pre_*");
So, putting all that together, this is what you were trying to do:
#!/usr/bin/perl -w
use File::Glob qw(:globally :nocase);
my @files = glob("*pre_*");
foreach $file ( @files )
{
my @names = split(/pre_/i, $file);
rename($file, $names[1]);
}
But this isn't really a good idea. If you have two files with the same prefix (e.g. pre_file1 and apre_file1), then the first file will be overwritten by the second file since, in this example, both files would be renamed to file1 because your command would remove pre_ and everything before it. So you can add a warning and skip those files:
#!/usr/bin/perl -w
use File::Glob qw(:globally :nocase);
my @files = glob("*pre_*");
foreach $file ( @files )
{
my @names = split(/pre_/i, $file);
if (-e $names[1]) {
warn "Warning: Not renaming '$file' to '$names[1]' " .
"because '$names[1]' exists.\n";
next;
}
rename($file, $names[1]) or
warn "rename '$file': $!\n";
}
Of course, all of this is reinventing the wheel. You can just install perl-rename (on Debian, Ubuntu, etc. run apt install rename, on other systems it might be called prename or perl-rename), and then you can do (although this is not case insensitive, it only finds files containing pre_):
rename -n 's/.*?pre_//si' ./*
The -n causes rename to just print what it would do, without actually renaming. Once you are sure the renaming works as expected, run the command again without the -n to actually rename the files.
Here the shell passes all (non-hidden) files to rename and rename will ignore the files whose name the perl code doesn't change. You could also tell the shell to only pass the file names that contain pre_ case-insensitively with ./~(i)*pre_* in ksh93, ./(#i)*pre_* in zsh -o extendedglob, setting the nocaseglob option globally in bash, zsh or yash or use ./*[pP][rR][eE]_*.
| Renaming files in Linux using perl scripting |
1,409,882,382,000 |
Well to put it simply, I have duplicate files in a folder, with this form:
file.ext
file(1).ext
file(2).ext
file(3).ext
otherfile.ext
otherfile(1).ext
otherfile(2).ext
...
I want to move only file.ext and otherfile.ext to another folder. Is it possible to do it in bash?
I thought that maybe awk would be helpful?
|
In bash:
shopt -s extglob # activates extended pattern matching features
mv !(*\(+([0-9])\)).ext /path/to/target/
The extended glob pattern matches all files that don't end with (n).ext, where n is one or more digits: +([0-9]).
You can check it with echo:
echo !(*\(+([0-9])\)).ext
Prints:
file.ext otherfile.ext
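A sandbox run of the whole thing (bash), with hypothetical file names:

```sh
# Create sample files, then move only the ones without a (n) suffix.
cd "$(mktemp -d)"
mkdir target
touch 'file.ext' 'file(1).ext' 'file(2).ext' 'otherfile.ext' 'otherfile(1).ext'
shopt -s extglob
mv !(*\(+([0-9])\)).ext target/
ls target                      # file.ext  otherfile.ext
```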
| Move unique files from a folder with duplicates files |
1,409,882,382,000 |
I've tried using answers given on here, and it doesn't seem to be working. Below are the commands I tried to remove all files with the index.php prefix in this directory on my CentOS system. The first two seem to have run but didn't do anything?
$ find . -prune -name 'index.php.*' -exec rm {} +
$ find . -prune -name 'index*' -exec rm {} +
$ rm index.php*
-bash: /usr/bin/rm: Argument list too long
|
First, note that your original find commands did nothing because -prune applies to the starting point . and stops find from descending into the directory at all; simply drop it. Now, let's assume we have this test data set of test files:
$ tree
.
├── index.php
├── index.php.bar
├── index.php.foo
├── keppme.php
└── level1
├── index.php
├── index.php.l1
├── keepme.php
└── level2
├── index.php
├── index.php.foo
└── keepme.php
Delete all files starting with index.php:
$ find . -type f -name 'index.php*' -delete
Then test files looks like:
$ tree
.
├── keppme.php
└── level1
├── keepme.php
└── level2
└── keepme.php
Delete those with something added after the .php extension (like index.php.foo) but keep index.php:
$ find . -type f -name 'index.php.*' -delete
Then test data shows:
$ tree
.
├── index.php
├── keppme.php
└── level1
├── index.php
├── keepme.php
└── level2
├── index.php
└── keepme.php
Instead of using the -delete option, you can also choose xargs to delete files in parallel.
For a big collection of files this can sometimes speed up the whole process, but not always.
Run rm command on every core/cpu with max 100 files per rm invocation:
$ find . -type f -name 'index.php.*' -print0 | xargs -r0 -P $(nproc) -n 100 rm
| Removing multiple files with same prefix (argument list too long) |
1,409,882,382,000 |
I am backing up files, and I have a lot of files duplicated in multiple locations. I've used fdupes to find duplicates, but I'm actually looking for some sort of inverse of this tool.
I want to see if dir A and its sub directories contain any file that dir B does not contain. I'd like to see a list of files, if that would be possible, based on the contents of the file (comparing file size and hash).
Does any such tool already exist? (Or am I even approaching this completely wrong)
|
You could try:
diff --brief -r dir1/ dir2/ > logoutputtoafile.log
Remove --brief if you wish more detail.
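A sandbox run showing what the output looks like when one directory holds an extra file:

```sh
# dirA has one file that dirB lacks.
cd "$(mktemp -d)"
mkdir dirA dirB
echo same  > dirA/common.txt
echo same  > dirB/common.txt
echo extra > dirA/only_here.txt
diff --brief -r dirA dirB || true   # diff exits non-zero when it finds differences
```

This prints `Only in dirA: only_here.txt`, i.e. the files present on one side only.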
| Find unique files between two directories (recursively) |
1,409,882,382,000 |
Rookie question. Following this answer, Move last part of filename to front, I'm trying to do the same, except all files in my case contain square brackets.
What I want is to move the title to the other side of the brackets (keeping the file extension), so this: title ![s2_e2].mp4 renames to this: [s2_e2]title !.mp4
The first part may contain exclamation marks and spaces, but no other characters which need to be escaped.
I have come up with this, but it only removes the filename until the first square bracket: rename -n 's/^.*\[//' *
Am I on the right path here? And how can I accomplish it with the perl rename tool on Linux?
Thanks!
|
If I understand correctly, you need to move any text inside square brackets to the beginning of the file name. Assuming you only ever have one set of square brackets in the file name, you can do:
rename -n 's/(.*)(\[.+?\])/$2$1/s' *
Running this on your example gives:
$ rename -n 's/(.*)(\[.+?\])/$2$1/s' *
title ![s2_e2].mp4 -> [s2_e2]title !.mp4
| Rename file with the rename tool - moving around square brackets |
1,409,882,382,000 |
I have one parent directory, /home/test, and under that directory I have multiple directories. The names are server{1..10}, and one of them, server3, has a few files which I have copied from a remote server. I tried to use cp but it's not working for me. Is there a way to copy all files, or one file, from the server3 directory to the rest of the server directories under /home/test?
|
If I'm understanding what you're after, the easiest way is a for loop:
myList="server1 server2 server4 server5 server6 server7 server8 server9 server10"
for myDir in $myList ; do cp server3/* $myDir/ ; done
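With bash, the list can also be generated with brace expansion instead of typing every name. A sandbox sketch (the file names here are hypothetical; the real directories would already exist under /home/test):

```sh
# Sandbox setup.
cd "$(mktemp -d)"
mkdir server{1..10}
touch server3/a.txt server3/b.txt

# Brace expansion generates the target list, skipping server3 itself.
for myDir in server{1,2} server{4..10}; do
    cp server3/* "$myDir"/
done
```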
| Copying all files from one directory to all directories under same parent directory |
1,409,882,382,000 |
I have a folder called movies on my Ubuntu, which contains many subfolders.
Each subfolder contains 1 mp4 file and may contain other files (jpg, srt).
Each subfolder has the same title format:
My Subfolder 1 (2001) Bla Bla
My Subfolder 2 (2000) Bla
My Subfolder 3 (1999)
How can I rename the mp4 files the same as the parent folder, but without the year and the blabla?
For example, the mp4s inside the subfolders above become :
My Subfolder 1.mp4
My Subfolder 2.mp4
My Subfolder 3.mp4
I want the mp4s to stay in their subfolder, just their name will be changed. The year is always in parentheses.
|
Here's a bash solution:
cd movies
for mp4 in */*.mp4
do
if [[ $mp4 =~ ^(.*)\ \( ]]
then
echo mv -- "$mp4" ...to... "${mp4%%/*}/${BASH_REMATCH[1]}".mp4
fi
done
This loops over every mp4 file in every subdirectory of "movies" and applies a pattern-matching test to it. If it matches:
^ - from the beginning
(.*) - capture any number of characters and save them off
\ \( - followed by a space and an open parenthesis
If that match succeeds, then we've found an mp4 file in a directory that has the pattern you're expecting. Bash saves the parenthesized matches in the $BASH_REMATCH array variable, so we (would) call mv with the original filename and a pieced-together new name:
${mp4%%/*} is the original directory name
/ - directory separator
${BASH_REMATCH[1]}.mp4 - the saved portion from above, suffixed with .mp4
If the results look correct, remove the echo and ...to... portions.
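A sandbox run of the approach (hypothetical titles, each folder holding one plainly-named mp4):

```sh
# Set up folders in the "Title (Year) Extra" pattern.
cd "$(mktemp -d)"
mkdir 'My Subfolder 1 (2001) Bla Bla' 'My Subfolder 2 (2000) Bla'
touch 'My Subfolder 1 (2001) Bla Bla/video.mp4' 'My Subfolder 2 (2000) Bla/video.mp4'

# Rename each mp4 after its folder, minus year and trailing text.
for mp4 in */*.mp4; do
    if [[ $mp4 =~ ^(.*)\ \( ]]; then
        mv -- "$mp4" "${mp4%%/*}/${BASH_REMATCH[1]}".mp4
    fi
done

ls -R    # each folder now contains "My Subfolder N.mp4"
```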
| How to rename files with same name as parent folder |
1,409,882,382,000 |
It's surely easy to do while using a graphical environment, but when I'm using the shell I have no idea how to do it. I already tried copy, move, and delete, and discovered that these words are not existing commands in the shell.
|
These are:
Copy: cp file_name <directory|file_name>
Move: mv file_name <directory|file_name>
Delete: rm file_name
Visit their man pages for more information.
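A minimal session in a scratch directory, with hypothetical file names:

```sh
cd "$(mktemp -d)"
echo hello > notes.txt
cp notes.txt backup.txt    # copy: notes.txt and backup.txt both exist now
mkdir archive
mv backup.txt archive/     # move: backup.txt is now archive/backup.txt
rm notes.txt               # delete: notes.txt is gone
```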
| How to copy/move/delete files from the shell? |
1,409,882,382,000 |
I have a number of files in a folder on a Linux machine with the following names:
11, 12, 13, 14, 15, 21, 22, 23, 24, 25, 31, 32, 33, 34, 35
I would like to use regex in order to rename with the .inp extension
I tried
mv * *.inp
mv: target '*.inp' is not a directory
which provided an error. I also tried using the regex [123][12345] instead of the *.
So, I understand that mv is used to move files around. I also got the idea that perhaps I could use ./*.inp to force mv to write in the same folder, but it failed. So, apart from not understanding correctly how mv works, how would I proceed to have this done with mv?
|
The issue with your command is that the mv command only can move/rename a single file (when given exactly two command line arguments), or move a bunch of files to a single destination directory (more than two command line arguments).
In your case, you use the expansion of * *.inp as the arguments, and this is going to expand to all the visible filenames in the current directory, followed by the names that matches *.inp. Assuming that this expands to more than two names, then the last argument needs to be the name of a directory for the command to be a valid mv command, or you'll get a "is not a directory" error.
In this case, we instead want to use mv with two arguments at a time, and for that we need to use a shell loop:
for name in [123][1-5]; do
mv "$name" "$name.inp"
done
This loops over all names that matches (a variant of) the filename globbing pattern that you mentioned (note, this is not a regular expression). In the loop body, the current name will be stored in the name variable, and the mv simply renames the file by adding .inp at the end of the name.
This does not prevent mv from overwriting existing files in the case where there might be a name collision. For that, assuming you use GNU mv, you may want to use mv with its --no-clobber (or -n) option, or possibly with its --backup (or -b) option.
Or, you could do an explicit check for the existence of the destination name and skip the current file if it exists (which would also avoid moving files into existing directories if you happened to have a directory with the same name as the destination name):
for name in [123][1-5]; do
[ -e "$name.inp" ] || [ -L "$name.inp" ] && continue
mv "$name" "$name.inp"
done
Using GNU mv with --no-target-directory (or -T) in combination with either -n or -b would avoid overwriting existing files (or back them up, with -b) and also avoid moving the files into a subdirectory that happened to have the same name as the destination name.
| Renaming files using regex |
1,409,882,382,000 |
I am trying to come up with a bash script to remove parts of a file name on CentOS. My file names are:
119903_Relinked_new_3075_FileNote_07_02_2009_JHughes_loaMeetingAndSupport_FN_205.doc
119904_relinked_new_2206_Support_Intensity_Scale_SYCH_SIS_264549.pdf
119905_relinked_new_3075_Consent_07_06_2009_DSweet_CRFA_CF_16532.docx
29908_relinked_new_2206_Assessor_Summary_Report_SERT_I_OTH_264551.pdf
009712_relinked_new_3075_Consent_07_06_2009_CWell_DPRT_check_CF_16535.pdf
I would like to remove 119903_Relinked_new_ from the file names. The end result should be:
3075_FileNote_07_02_2009_JHughes_loaMeetingAndSupport_FN_205.doc
2206_Support_Intensity_Scale_SYCH_SIS_264549.pdf
3075_Consent_07_06_2009_DSweet_CRFA_CF_16532.docx
2206_Assessor_Summary_Report_SERT_I_OTH_264551.pdf
3075_Consent_07_06_2009_CWell_DPRT_check_CF_16535.pdf
I have been trying multiple scripts but coming up short. The number before _Relinked_new_ is different in most cases and the file extensions vary across .pdf, .docx, .doc etc. Any help would be appreciated.
|
Using the prename(1) tool (it might be called rename or prename or perl-rename depending on your system):
rename 's/[0-9]+_[rR]elinked_new_//' /path/to/dir/*
This will use a regular expression to match the pattern and replace it with nothing on the specified files.
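If perl rename isn't available, plain shell parameter expansion works too. A sandbox sketch with shortened hypothetical names (note: unlike rename -n, this has no dry-run mode, so test with echo first):

```sh
cd "$(mktemp -d)"
touch 119903_Relinked_new_3075_FileNote.doc 119904_relinked_new_2206_Scale.pdf

for f in *_[Rr]elinked_new_*; do
    mv -- "$f" "${f#*_[Rr]elinked_new_}"   # strip everything up to the marker
done
ls
```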
| Script to remove specific strings in file name |
1,409,882,382,000 |
I'm following a tutorial on how to install firmware on OpenBSD. The tutorial has me creating a new msdos file system on the USB with newfs_msdos -F 32 /dev/rsd2c, then taking the USB to a system with an internet connection and moving the firmware tarball onto it. I have never moved data to an msdos fs via the command line before. The tutorial shows him using Dolphin on a Manjaro install; however, I do not have any systems with GUIs installed.
How can I move the tarball to the usb drive?
I've tried mounting it, then moving the file to the mounted directory, but it does not work.
Stating failed to preserve ownership for '/mnt2/iwn-firmwae.tgz': Operation not permitted
Here's a link to the tutorial: https://www.youtube.com/watch?v=kUrUq2qfWiY
|
The message
failed to preserve ownership for '/mnt2/iwn-firmwae.tgz': Operation not permitted
is more of a warning than an error. The files copied successfully, but permissions and ownership of the files were not copied.
Most likely this is a DOS filesystem which does not support unix ownership and permissions. For the purposes you describe, permissions and ownership are not important, so you can safely ignore this message as a warning.
| Error: "failed to preserve ownership" when trying to move files to a FAT32 partition on OpenBSD |
1,409,882,382,000 |
I have a directory that has files in it that are named 'o1.ray' to 'o293.ray'. I want to move them to another directory while renaming them 'o132.ray' to 'o424.ray'.
How can I do this in the terminal?
cd directory
for i in {1...293};
do cp o$i.ray subdirectory/o$i+131.ray;
done
I know this is wrong because I get the error message:
cp: cannot stat 'o{1...293}.ray': No such file or directory
|
As Kusalananda hinted at, this is mostly an adjustment of syntax:
for index in {1..293}; do echo mv o"${index}".ray subdirectory/o"$((index+131))".ray; done
Remove the echo when the output looks correct.
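A sandbox run with a smaller range (echo removed), to see the renumbering in action:

```sh
cd "$(mktemp -d)"
mkdir subdirectory
touch o{1..5}.ray

# Move oN.ray to subdirectory/o(N+131).ray
for index in {1..5}; do
    mv o"${index}".ray subdirectory/o"$((index+131))".ray
done
ls subdirectory    # o132.ray ... o136.ray
```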
Or, with zsh's zmv module:
autoload zmv
zmv -n 'o(<->).ray' 'subdirectory/o$[$1 + 131].ray'
The $[ ... ] syntax performs arithmetic; $1 is captured with the parenthesis around the <-> that follows the o. The <-> is a zsh wildcard that captures numeric ranges; without any endpoints, it's open-ended; for your case, you could be very specific with:
zmv -n 'o(<1-293>).ray' 'subdirectory/o$[$1 + 131].ray'
Remove the -n when the output looks correct.
| How to rename files with sequential names to an another sequence using Terminal? |
1,409,882,382,000 |
Suppose I have the same version of the Linux kernel but I changed some driver lines. Is there any way to compare these kernel trees and list the results? The result would be helpful for going back if I changed a lot in the original drivers.
|
The traditional method:
diff -r dir1 dir2
That gives you a file-by-file difference, which can be kind of wordy. If you have Gnu diff,
you can try:
diff -r --brief dir1 dir2
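Since the stated goal is being able to go back, saving a unified diff as a patch can help. A sandbox sketch with hypothetical paths:

```sh
# Capture driver changes as a patch so they can be reverted later.
cd "$(mktemp -d)"
mkdir -p orig/drivers new/drivers
echo 'old line' > orig/drivers/foo.c      # hypothetical driver file
echo 'new line' > new/drivers/foo.c
diff -ru orig new > changes.patch || true # diff exits 1 when trees differ
grep '^[-+][^-+]' changes.patch           # show just the changed lines
```

The changes could later be undone with something like patch -R -p1 < changes.patch run from inside the modified tree.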
| Compare two similar directories and list differences between files |
1,409,882,382,000 |
I had about 170 MiB free space in my Raspberry Pi so I decided to delete the .deb packages (about 800 MiB) in /var/cache/apt.
I opened sudo pcmanfm, selected the files and pressed the delete button (not shift+delete). The files were gone, but the free space did not increase.
The pcmanfm window with elevated privileges says that recycle bin is not supported, and the files did not go to ordinary recycle bin.
If the free space did not increase, then where did the files go, and how to delete them properly?
I am using latest version of Raspbian.
Edit:
The deleted files were in /root/.local/share/Trash/files/. They were found following the accepted answer.
|
The following command will print all debian packages found on the system:
sudo find / -type f -iname "*.deb"
this will delete them:
sudo find / -type f -iname "*.deb" -exec rm -v {} \;
If the free space did not increase, then where did the files go, and
how to delete them properly?
PCManFM's trashcan location is:
~/.local/share/Trash/files it could go there.
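Since the deletion was done from an elevated file manager, check root's trash as well. Emptying a trash directory looks like the following (sandboxed here with dummy files; on the Pi the real path is likely /root/.local/share/Trash/files and needs sudo, so double-check the path before using rm -rf):

```sh
# Build a throwaway stand-in for a Trash/files directory.
trash="$(mktemp -d)/Trash/files"
mkdir -p "$trash"
touch "$trash/old1.deb" "$trash/old2.deb"

rm -rf "$trash"/*      # empty the trash; the disk space is reclaimed
ls -A "$trash"         # prints nothing
```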
Clear unused packages from cache:
sudo apt-get autoclean
clear all cached packages:
sudo apt-get clean
| Free space does not increase even after deleting |
1,409,882,382,000 |
I have a nautilus script that generates an archive file based on the files selected in the nautilus window. This archive file is created in the /tmp directory. I want a way to copy this file to the clipboard from the script, so that the user can just go to desktop or home directory and paste it.
I have tried doing this with xclip and xsel, but they don't seem to replicate a file copying operation, rather they copy the contents of a file.
xclip -in -selection c generated-archive
echo -n generated-archive | xsel --clipboard --input
Neither of them do what I need.
So, I want to know if this is possible, and if it is, how should I go about it?
Thanks.
|
It seems that Nautilus keeps track of its internal state with respect to changes to the clipboard, which means that any change of state to the clipboard (including replacement with an identical filepath string) automatically cancels the pending-paste state, hence nothing happens when an externally loaded clipboard contains a valid filepath...
Nautilus only recognizes a file copy/cut which has been initiated from within Nautilus itself.
This is exactly what you have observed... with perhaps some explanation as to why... I noticed that the Nautilus source's 'cut-n-paste-code' contains a lot about saved states.
# In Nautilus, manually "copy" a file (to the clipboard) using Ctrl+C
xsel -ob |xxd # hex-display the clipboard contents
echo "### At this point, Nautilus **paste** works."
read # pause
xsel -ob |xsel -ib # Replace clipboard with itself
xsel -ob |xxd # hex-display clipboard contents again
echo "### At this point, Nautilus **paste** does NOT work."
After your manually copy/cut, you can perform endless actions (either in Nautilus or elswhere) and the Ctrl+V paste in Nautilus will work, but as soon as you modify the clipboard, it won't 'paste'...
| Copy a file from a nautilus-script to clipboard |
1,409,882,382,000 |
I have many files which contain a string like this:
/databis/defontis/Dossier_fasta_chrm_avec_piler/SRR6237661_chrm.fasta: N putative CRISPR arrays found
Where the N is a number that can be either 0 or greater. I need to move all files where the N is 0 to the directory Sans_crispr and all files where N is greater than 0 to the directory Avec_crispr.
I can also see with ls that all files where no CRISPR was found (those where N is 0) are smaller than 3355 bytes, so maybe that can be used.
I tried this:
find . -name "*.out" -type 'f' -size -5k -exec mv {} /databis/defontis/Dossier_fasta_chrm_avec_piler/Dossier_fasta_chrm_sortie_pilercr/Sans_Crispr/ \;
But for all my files I have this
mv: cannot move './SRR5273182_chrm.fasta.fa-pilercr.out' to '/databis/defontis/Dossier_fasta_chrm_avec_piler/Dossier_fasta_chrm_sortie_pilercr/Sans-Crispr/': Not a directory
I tried some for f in ...do done or if then fi. I tried with grep for the pattern ' 0 putative CRISPR arrays found'
But none of them worked, always an error or I didn't find what I want.
This is an example of my files:
And this is the contents:
With Crispr
Help on reading this report
===========================
This report has three sections: Detailed, Summary by Similarity and Summary by Position.
The detailed section shows each repeat in each putative CRISPR array.
The summary sections give one line for each array.
An 'array' is a contiguous sequence of CRISPR repeats looking like this:
REPEAT Spacer REPEAT Spacer REPEAT ... Spacer REPEAT
Within one array, repeats have high similarity and spacers are, roughly speaking, unique within a window around the array. In a given array, each repeat has a similar length, and each spacer has a similar length. With default parameters, the algorithm allows a fair amount of variability in order to maximize sensitivity. This may allow identification of inactive ("fossil") arrays, and may in rare cases also induce false positives due to other classes of repeats such as microsatellites, LTRs and arrays of RNA genes.
Columns in the detailed section are:
Pos Sequence position, starting at 1 for the first base.
Repeat Length of the repeat.
%id Identity with the consensus sequence.
Spacer Length of spacer to the right of this repeat.
Left flank 10 bases to the left of this repeat.
Repeat Sequence of this repeat.
Dots indicate positions where this repeat
agrees with the consensus sequence below. Spacer Sequence of spacer to the right of this repeat,
or 10 bases if this is the last repeat.
The left flank sequence duplicates the end of the spacer for the preceding repeat; it is provided to facilitate visual identification of cases where the algorithm does not correctly identify repeat endpoints.
At the end of each array there is a sub-heading that gives the average repeat length, average spacer length and consensus sequence.
Columns in the summary sections are:
Array Number 1, 2 ... referring back to the detailed report.
Sequence FASTA label of the sequence. May be truncated.
From Start position of array.
To End position of array.
# copies Number of repeats in the array.
Repeat Average repeat length.
Spacer Average spacer length.
+ +/-, indicating orientation relative to first array in group.
Distance Distance from previous array.
Consensus Consensus sequence.
In the Summary by Similarity section, arrays are grouped by similarity of their consensus sequences. If consensus sequences are sufficiently similar, they are aligned to each other to indicate probable relationships between arrays.
In the Summary by Position section, arrays are sorted by position within the input sequence file.
The Distance column facilitates identification of cases where a single array has been reported as two adjacent arrays. In such a case, (a) the consensus sequences will be similar or identical, and (b) the distance will be approximately a small multiple of the repeat length + spacer length.
Use the -noinfo option to turn off this help. Use the -help option to get a list of command line options.
pilercr v1.06 By Robert C. Edgar
/databis/defontis/Dossier_fasta_chrm_avec_piler/SRR2177954_chrm.fasta: 1 putative CRISPR arrays found.
DETAIL REPORT
Array 1
>SRR2177954.k141_500270 flag=1 multi=9.2309 len=7453
Pos Repeat %id Spacer Left flank Repeat Spacer
========== ====== ====== ====== ========== ==================================== ======
66 36 100.0 25 CAGAAGTATT .................................... CTCACACACGCTGATGCAGACAACA
127 36 100.0 26 GCAGACAACA .................................... GCGAGAGCAGGGATTTGGAACGTAAT
189 36 100.0 26 GGAACGTAAT .................................... ATGTTGATGGAAAAACTCCCACAGAC
251 36 100.0 TCCCACAGAC .................................... ACTGAATGTG
========== ====== ====== ====== ========== ====================================
4 36 25 ATCTACAAAAGTAGAAATTTTATAGAGGTATTTGGC
SUMMARY BY SIMILARITY
Array Sequence Position Length # Copies Repeat Spacer + Consensus
===== ================ ========== ========== ======== ====== ====== = =========
1 SRR2177954.k141_ 66 221 4 36 25 + ATCTACAAAAGTAGAAATTTTATAGAGGTATTTGGC
SUMMARY BY POSITION
>SRR2177954.k141_500270 flag=1 multi=9.2309 len=7453
Array Sequence Position Length # Copies Repeat Spacer Distance Consensus
===== ================ ========== ========== ======== ====== ====== ========== =========
1 SRR2177954.k141_ 66 221 4 36 25 ATCTACAAAAGTAGAAATTTTATAGAGGTATTTGGC
Without Crispr
Help on reading this report
===========================
This report has three sections: Detailed, Summary by Similarity
and Summary by Position.
The detailed section shows each repeat in each putative
CRISPR array.
The summary sections give one line for each array.
An 'array' is a contiguous sequence of CRISPR repeats
looking like this:
REPEAT Spacer REPEAT Spacer REPEAT ... Spacer REPEAT
Within one array, repeats have high similarity and spacers
are, roughly speaking, unique within a window around the array.
In a given array, each repeat has a similar length, and each
spacer has a similar length. With default parameters, the
algorithm allows a fair amount of variability in order to
maximize sensitivity. This may allow identification of
inactive ("fossil") arrays, and may in rare cases also
induce false positives due to other classes of repeats
such as microsatellites, LTRs and arrays of RNA genes.
Columns in the detailed section are:
Pos Sequence position, starting at 1 for the first base.
Repeat Length of the repeat.
%id Identity with the consensus sequence.
Spacer Length of spacer to the right of this repeat.
Left flank 10 bases to the left of this repeat.
Repeat Sequence of this repeat.
Dots indicate positions where this repeat
agrees with the consensus sequence below.
Spacer Sequence of spacer to the right of this repeat,
or 10 bases if this is the last repeat.
The left flank sequence duplicates the end of the spacer for the preceding
repeat; it is provided to facilitate visual identification of cases
where the algorithm does not correctly identify repeat endpoints.
At the end of each array there is a sub-heading that gives the average
repeat length, average spacer length and consensus sequence.
Columns in the summary sections are:
Array Number 1, 2 ... referring back to the detailed report.
Sequence FASTA label of the sequence. May be truncated.
From Start position of array.
To End position of array.
# copies Number of repeats in the array.
Repeat Average repeat length.
Spacer Average spacer length.
+ +/-, indicating orientation relative to first array in group.
Distance Distance from previous array.
Consensus Consensus sequence.
In the Summary by Similarity section, arrays are grouped by similarity of their
consensus sequences. If consensus sequences are sufficiently similar, they are
aligned to each other to indicate probable relationships between arrays.
In the Summary by Position section, arrays are sorted by position within the
input sequence file.
The Distance column facilitates identification of cases where a single
array has been reported as two adjacent arrays. In such a case, (a) the
consensus sequences will be similar or identical, and (b) the distance
will be approximately a small multiple of the repeat length + spacer length.
Use the -noinfo option to turn off this help.
Use the -help option to get a list of command line options.
pilercr v1.06
By Robert C. Edgar
/databis/defontis/Dossier_fasta_chrm_avec_piler/ERR1544006_chrm.fasta: 0 putative CRISPR arrays found.
Thank you for your time
|
Simply iterate over the files, and grep for : 0 putative CRISPR arrays. If the grep finds a match, move the file:
mkdir -p Sans_crispr Avec_crispr
for file in *pilercr.out; do
if grep -q ': 0 putative CRISPR arrays' "$file"; then
mv "$file" Sans_crispr
else
mv "$file" Avec_crispr
fi
done
The -q flag to grep tells it not to print any output, but it will still exit with a failed status if no match is found and with success if a match is found. So here we use that to move the files to the appropriate folder.
The reason you were getting this error:
mv: cannot move './SRR5273182_chrm.fasta.fa-pilercr.out' to '/databis/defontis/Dossier_fasta_chrm_avec_piler/Dossier_fasta_chrm_sortie_pilercr/Sans-Crispr/': Not a directory
Is because the directory /databis/defontis/Dossier_fasta_chrm_avec_piler/Dossier_fasta_chrm_sortie_pilercr/Sans-Crispr/ doesn't exist. This is why the first command in the little script above is mkdir -p Sans_crispr Avec_crispr, which means "create the directories Sans_crispr and Avec_crispr if they don't already exist".
| How can I move files to different directories depending on their contents? |
1,409,882,382,000 |
I want to download .jpg/.png/.tiff files into my ~/Pictures/ folder, .mkv/.avi/.mp4 in my ~/Videos folder etc.
Is there any way to do this?
The only solution I could come up with was using different aliases like:
alias vwget="wget -P ~/Videos/"
I am using Linux Mint.
|
I would just do this in two steps:
Download everything into one directory, preferably on the same file system as your ~/Pictures and ~/Videos directories, so that the later mv is an instant rename rather than a copy.
Once they have downloaded, cd into the download directory and run these commands:
mv *.jpg *.png *.tiff ~/Pictures
mv *mkv *avi *mp4 ~/Videos
You might be able to make some complex script that detects the extension and issues a different wget command depending on it, but is it really worth it?
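If you do want a one-command version, a small dispatcher function is enough. This is only a sketch: the names dest_for and dwget are made up, and URLs with a query string after the extension would need extra handling:

```shell
# Map a URL's extension to a download directory (names are made up).
dest_for() {
    case "${1##*.}" in
        jpg|png|tiff) echo "$HOME/Pictures" ;;
        mkv|avi|mp4)  echo "$HOME/Videos" ;;
        *)            echo "$HOME/Downloads" ;;
    esac
}

# dwget URL: download into the directory chosen by dest_for.
dwget() {
    wget -P "$(dest_for "$1")" "$1"
}
```

Then dwget http://example.com/picture.jpg lands in ~/Pictures without you picking the alias yourself.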
| Using wget to download files to specific folders based on file extension |
1,586,785,493,000 |
How can I delete many folders that have more than one - in their names?
For example:
e97bf913-5759-4fff-bdaf-2f931b53a432/
39f953c5-dab0-420e-a650-a50a30f48097/
|
The pattern
*-*-*/
matches directories with two or more hyphens. The * matches any string (zero or more characters).
If you want to match only directory names that do not start or end with a hyphen (as in your example), you could use
[!-]*-*-*[!-]/
instead. The [!-] matches any character that is not (!) a hyphen.
Run
ls -d [!-]*-*-*[!-]/
first to see if these are the ones you want to delete. Then run
rm -r [!-]*-*-*[!-]/
to delete them recursively. If you should really need to force the deletion, add -f to the command.
| Delete set of folders contain more than one '-' in different places as a part of their name |
1,586,785,493,000 |
I'd like to rename the recon.text file with its directory name. I have 1000 directories. Some help, please?
7_S4_R1_001_tri10_sha/recon.text
8_S1_R1_001_tri15_sha/recon.text
9_S8_R1_001_tri20_sha/recon.text
10_S5_R1_001_tri25_sha/recon.text
11_S3_R1_001_tri30_sha/recon.text
|
With rename:
rename -n 's!(.*sha)/recon\.text!$1/$1.txt!' */recon.text
Remove -n switch when the output looks good to rename for real.
man rename
There are other tools with the same name which may or may not be able to do this, so be careful.
The rename command that is part of the util-linux package, won't.
If you run the following command (GNU)
$ file "$(readlink -f "$(type -p rename)")"
and the result contains Perl script text executable (and not ELF), then this seems to be the right tool =)
If not, to make it the default (usually already the case) on Debian and derivative like Ubuntu :
$ sudo apt install rename
$ sudo update-alternatives --set rename /usr/bin/file-rename
For RedHat-family distros:
yum install prename
The 'prename' package is in the EPEL repository.
For archlinux:
pacman -S perl-rename
For *BSD:
pkg install gprename
or p5-File-Rename
For Mac users:
brew install rename
If you don't have this command with another distro, search your package manager to install it or do it manually (no deps...)
This tool was originally written by Larry Wall, Perl's dad.
| How to rename files with the name of their parent directory? |
1,586,785,493,000 |
I am traversing through directories and moving old files to a different location.
For example, I want to move a file located at /a/b/c/d.txt to /x/a/b/c/d.txt without errors.
Is this possible in a single command? If mv doesn't work, would a combination of cp and rm work?
I only need to move a single file at a time.
If the dest tree already exists, it should not raise an error.
If the dest tree doesn't exist, it should be created.
This post doesn't seem to address the requirement. Also, I am able to get this done in multiple lines of code, so could there be a predefined one-liner kind of option? (I want to integrate this with a programming language.)
|
Possibly not what you are looking for, if you need to use standard commands only, but rsync may help:
$ mkdir a a/b a/b/c; echo foo >a/b/c/d.txt; tree a
a
└── b
└── c
└── d.txt
2 directories, 1 file
$ rsync --relative --remove-source-files a/b/c/d.txt x/
$ tree a x
a
└── b
└── c
x
└── a
└── b
└── c
└── d.txt
5 directories, 1 file
From the manual:
--relative, -R
Use relative paths. This means that the full path names specified on the command line are sent to the server rather than just the last parts of the filenames. ...
and
--remove-source-files
This tells rsync to remove from the sending side the files (meaning non-directories) that are a part of the transfer and have been successfully duplicated on the receiving side.
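If rsync isn't available, the same move can be done with plain POSIX tools; this sketch rebuilds the demo tree from above and then recreates the parent directories under the destination root:

```shell
# Build the same demo tree, then recreate its parent directories under
# x/ and move the file; mkdir -p is a no-op for directories that exist.
mkdir -p a/b/c
echo foo > a/b/c/d.txt
mkdir -p "x/$(dirname a/b/c/d.txt)" &&
mv a/b/c/d.txt x/a/b/c/d.txt
```

Unlike rsync --relative, this needs you to spell the relative source path twice, but it works with nothing beyond mkdir, dirname and mv.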
| move one file from /a/b/c/d.txt to /x/a/b/c/d.txt , Create full tree if not exist [duplicate] |
1,586,785,493,000 |
I have a directory that has subdirectories in it and I need to get the names of some of these subdirectories but there is one with a similar pattern as them that I want to ignore.
For example, if the list of Subdirectories is like this:
Name_One
Not_needed1
Name_Two
Name_Three
Not_needed2
Name_Zero
I want to store the names of the subdirectories Name_One, Name_Two, and Name_Three in variables but I want to ignore Name_Zero.
Also, the naming changes with each project but always follows the pattern of Name_Number and Name_Zero is always the one that needs to be excluded.
I am using Mac OS.
|
try:
find . -type d -name 'Name_*' ! -name 'Name_Zero'
To match directories named Name_[digit] and exclude only the directory Name_0:
find . -type d ! -name 'Name_0' -regex './Name_[0-9]*'
in case you have subdirectories too, use this:
find . -type d ! -name 'Name_0' -regex '.*/Name_[0-9]*'
The regex above matches directory names of the form Name_[zero-or-more-digits]; to avoid matching a bare Name_, use '.*/Name_[0-9][0-9]*' instead.
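To actually store the names in shell variables, as asked: one portable way (it works even in the old bash 3.2 that macOS ships, and in plain sh) is to collect the top-level matches into the positional parameters, skipping Name_Zero:

```shell
# Collect top-level Name_* directories, minus Name_Zero, into "$@".
set --
for d in ./Name_*/; do
    [ -d "$d" ] || continue              # glob matched nothing at all
    case $d in ./Name_Zero/) continue ;; esac
    set -- "$@" "${d%/}"
done
# "$@" now holds one directory name per parameter: "$1", "$2", ...
```

The trailing slash in the glob restricts matches to directories, and the whole thing is safe for names containing spaces.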
| Find a file by pattern but ignore others with similar patterns |
1,586,785,493,000 |
I have a symlink to a file on my Ubuntu system, and I need to copy the original file to a different directory and have a new name there. I am able to copy it to a different directory using
readlink -ne my_symlink | xargs -0 cp -t /tmp/
But I am not able to give a new name in the destination directory.
Basically, I am looking for a command that could look like:
readlink -ne base.txt | xargs -0 cp -t /tmp/newnametofile
When I try the exact same command above, it gives me file or directory not found error.
Anyway to achieve this?
|
cp will dereference symlinks with -L option.
This should work:
cp -L my_symlink /tmp/newnametofile
Regarding your xargs, the -t, --target-directory option of cp only takes a DIRECTORY as its argument. You could make it work using xargs -I{} cp {} /tmp/newnametofile (but I'd use cp -L anyway).
| copying a symlink to a target file using cp -t |
1,586,785,493,000 |
I have a series of files located in a series of folder, for example:
~/BR2_1-3/bin.1.permissive.tsv
~/BR2_1-3/bin.2.permissive.tsv
~/BR2_1-3/bin.3.orig.tsv
~/BR2_2-4/bin.1.strict.tsv
~/BR2_2-4/bin.2.orig.tsv
~/BR2_2-4/bin.3.permissive.tsv
~/BR2_2-4/bin.4.permissive.tsv
~/BR2_3-5/bin.1.permissive.tsv
~/BR2_3-5/bin.2.permissive.tsv
~/BR2_3-5/bin.3.orig.tsv
~/BR2_3-5/bin.4.orig.tsv
~/BR2_3-5/bin.5.permissive.tsv
...
What I want to do is to extract the 1st and 5th column from each of the *.tsv files and create a new tab delimited file in the corresponding folder. That I can do separately for each file under its corresponding folder by using the commands below:
$ awk -F '\t' 'OFS="\t" {if ($5 != "") print($1,$5)}' bin.1.permissive.tsv > test
$ sed -i '1d' test
$ mv test BR2_1-bin.1.permissive.ec
My question is, because I have over a hundred of this kind of file, is there a way to write a for loop to do this step at the terminal automatically?
The naming convention for the folder and files are as follows:
"BR(2~5)_(1~6)-(n, as the number of files contained in the folder)" for the folders;
"bin.n.(strict/permissive/orig).tsv" for the files.
One input file should be mapping to one output file. The name for an output files is "BR2_1-bin.1.permissive.ec" if the corresponding input file was "~/BR2_1-3/bin.1.permissive.tsv". And the name for an output file is "BR2_3-bin.3.orig.ec" if the corresponding input file was "~/BR2_3-5/bin.3.orig.tsv". In addition, the output file is supposed to be written in the same folder with its corresponding input file.
Thanks for this question from the comment.
Thank you in advance and all suggestions are welcomed!
|
find and xargs are typically recommended for this:
find "$HOME" -name \*.tsv |
xargs awk -F'\t' -v OFS='\t' '$5 != "" {print $1, $5}' >> output.tsv
or, more safely
find "$HOME" -name \*.tsv -print0 |
xargs -0 awk -F'\t' -v OFS='\t' '$5 != "" {print $1, $5}' >> output.tsv
find's -print0 directive prints out the matched files separated with a null byte, and xargs's -0 options uses the null byte to separate filenames. This is done because the null byte is not allowed to appear in filenames, while newline is a valid character.
OK, for each file to be processed into the corresponding .ec file:
find "$HOME" -name \*.tsv -print0 |
xargs -0 awk -F '\t' -v OFS='\t' '
FNR == 1 {
if (ec) close(ec)
ec = gensub(/\.tsv$/, ".ec", 1, FILENAME)
next
}
$5 != "" {print $1, $5 > ec}
'
Notes:
print ... > ec -- similar to redirection in the shell, this redirects the output to the filename contained in the ec variable.
unlike the shell, this does not overwrite the file for every "print", but only the first print truncates/creates the file and all subsequent prints append to it.
You can run into "too many open files" errors, so it's best practice to close an open file when you're done with it.
do this when you're at the first record of a file
if the ec variable is not empty, it holds a filename that was used for the previous file that was processed
gensub is a gawk-specific function, similar to sub and gsub. it's described in the manual
unlike sub and gsub, gensub returns the transformed value.
| Use For loop to extract certain columns from a series of files to write new tab-delimited files |
1,586,785,493,000 |
I've been testing this on a simple directory structure.
I'm trying to change any directory and/or sub directory with the name "Season" to "Sn".
I got to a point where the script would change what I wanted...except for the top directory, as seen in the list below - "Season 1" "Season 2" "Season 3".
Directory structure:
.
├── AnotherShow
│ ├── Sn1
│ ├── Sn2
│ ├── Sn3
│ └── Sn4
├── Movie1
│ ├── Sn1
│ └── Sn2
├── Movie2
│ ├── Sn1
│ ├── Sn2
│ ├── Sn3
│ └── Sn4
├── Movie3
│ ├── Sn1
│ └── Sn3
├── Movie4
│ ├── Sn2
│ └── Sn3
├── Season 1
├── Season 2
├── Season 3
├── Show
│ ├── Sn1
│ ├── Sn2
│ ├── Sn3
│ ├── Sn4
│ └── Sn5
└── TV
├── Sn1
├── Sn2
├── Sn3
└── Sn4
My script is:
array="$(find . -maxdepth 2 -type d -iname 'Season*' -print)"; # A more refined way to search for Seasons in a directory.
for dir in "${array[@]}"; do # Put this list into an array. Surround the array with quotes to keep all spaces (if any) together.
new="$(echo "$dir" | sed -e 's/Season 0/Sn/' -e 's/Season /Sn/')"; # Only change Season 0 to Sn. Leave others alone.
sudo mv -v "$dir" "$new" && echo Changed "$dir" to "$new";
done
|
The issue is that you are not creating an array, you are creating a string. You can test this quite easily by attempting to print the first element of your array:
$ array="$(find . -maxdepth 2 -type d -iname 'Season*' -print)";
$ echo $array
./Season 3 ./Season 1 ./Season 2
Looks good right? But if that were an array, you would be able to print each element separately. Unfortunately, the above is a simple string1 and not an array:
$ echo ${array[0]}
./Season 3 ./Season 1 ./Season 2
$ echo ${array[1]}
The array actually consists of a single string which is why the second element of the "array" (${array[1]}) is empty. In any case, you don't need an array for this, just read through the output of find (you also don't need the -print, that's what it does by default). A working version of your script could be:
#!/usr/bin/env bash
find . -maxdepth 2 -type d -iname 'Season*' | sort -r |
while IFS= read -r dir
do
mv "$dir" "${dir/Season /Sn}" && echo Changed "$dir" to "${dir/Season /Sn}";
done
There are a couple of tricks used there. First, the while loop to go through the results of find. This is the same basic principle as the for loop that you used but combined with read can read from standard input.
The IFS= is needed to avoid splitting on whitespace (without it, the directory Season 1 would be split into Season and 1).
The -r of read tells it to not allow backslashes to escape characters (things like \t and \n are treated literally) which makes sure this can deal with even the strangest names.
The sort -r is needed so that if you have a directory matching the pattern inside another matching directory, the child is listed before the parent. For example:
./Season 12/Season 13
./Season 12
This ensures that you run mv "Season 12" "Sn12" after running mv "Season 12"/"Season 13" "Season 12"/"Sn13". If you don't do that, the second command will fail since Season 12 no longer exists.
Finally, I removed the sudo. You should instead run the script as sudo script.sh. It is never a good idea to have random sudo calls in your scripts.
1I'm not sure about the details but apparently, bash treats string variables as arrays of one element.
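A side note on the sort -r trick: find's -depth option achieves the same child-before-parent ordering by itself, so an alternative sketch (with the same no-newlines-in-names assumption) is:

```shell
# -depth: find reports a directory's contents before the directory
# itself, so nested "Season" dirs are renamed before their parents.
find . -maxdepth 2 -depth -type d -iname 'Season*' |
while IFS= read -r dir
do
    mv "$dir" "${dir/Season /Sn}"
done
```

Like the script above, this relies on bash's ${var/pattern/replacement} substitution, so run it with bash rather than a plain sh.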
| Renaming directory bash script - target ... No such file or directory |
1,586,785,493,000 |
Where to begin to diagnose the problem?
I don't even know where to begin to look. I'm practicing string and file wrangling, and I cannot get sed '$d' file.sh to delete the last line of a file; furthermore, I tried using sudo (also this did not work).
My source says my command should work.
I know that you can use things like cat, tail, and head -n -1 to print, and I also read that I should be able to use these to delete, but the only reliable source I've found that talks about literally removing lines is here: https://linuxhint.com/sed-command-to-delete-a-line/ and it says to use the command shown above: it doesn't work.
Best guess?
I've noticed that the command succeeds with -i to delete the last line of the file. Why does -i work and the lack of -i not work? What is happening behind the scenes when I do not include -i, and will it destroy my harddrive?
|
sed (a command from the 70s) is a stream editor, it's not meant to edit files. See ed, ex, vi for file text editors.
sed takes a stream on input either stdin or the contents of files if some are passed as arguments, processes them on the fly applying the commands in the sed script for each line matching the address(es) and prints the result of that filtering on standard output.
So sed '$d' prints all the input except the last line ($) which is discarded / deleted.
You can save that output to a file by adding > newfile or pipe it along to another command like | tr -s '[:blank:]' ' ' to squeeze blanks.
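To emulate in-place editing portably (any POSIX sed, no -i), the usual idiom is a temporary file; note that mv replaces the original file, so hard links are broken and ownership may change:

```shell
printf 'line1\nline2\nlast\n' > file.sh      # demo input
# Never redirect a command's output into its own input file;
# write to a temp file first, then move it over the original.
sed '$d' file.sh > file.sh.tmp && mv file.sh.tmp file.sh
```

The && ensures the original is only replaced if sed succeeded.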
perl (from the mid-80s) has a -p option that makes it work like sed. perl also added a -i[.back] (for in-place) option that changes the behaviour so that instead of writing the result on stdout, it writes it into a new copy of the file, optionally keeping the original copy with a .back suffix.
Both The GNU and FreeBSD implementations of sed added a similar feature in the early 2000s though with different syntax for when no backup is required: sed -i '$d' file with GNU's and sed -i '' '$d' file with FreeBSD's. sed -i.back '$d' works in both, sed -i .back '$d' only in FreeBSD's.
Several other sed implementations have added a -i option since, most of them the GNU way, but not all and in any case, that's not a standard option.
sed is not the only stream editor application. cat, awk, paste, tr, nroff, eqn, sort, join, cut... work the same way and take input and produce output. The idea is that you pipe them together to achieve some task, one of the strengths of Unix shell scripting. Few of the others can edit a file in place. The GNU implementation of awk aka gawk has a -i in-place option. sort can be called with sort -o file file (it can because it needs to read the whole input before starting to write the output; there's no backup in that case).
| Why is `sed '$d' file` not deleting lastlines? |
1,586,785,493,000 |
Is there a means of using rsync to sync two folders?
I would like to shoot photos in JPG and RAW. Since viewing JPGs is much quicker than opening up RAWs, I'd like to do culling in the JPG folder and then sync it to the RAW folder. However they contain different extensions but the same file name.
I realize that the most likely solution is to create a sh to accomplish this and then give it an alias.
|
in general, if you have similar filenames that differ in one particular place, you can list the alternatives in {} braces (brace expansion):
[ruslanas@ruslanas test]$ ls
a.jpg a.raw a.doc
[ruslanas@ruslanas test]$ ls a.{jpg,raw}
a.jpg a.raw
[ruslanas@ruslanas test]$
or just use asterisk if you have only jpg and raw extension and only those 2 extensions.
also, if you would like to use a one-liner to extract jpg from raw and then move the raw to a raw dir:
cd /path/to/fresh-photos && ls *raw | while read line ; do culling $line -output /path/to/processed/jpg/$(echo $line | sed 's/raw$/jpg/g') && mv $line /path/to/processed/raw ; done
just replace culling with your own culling command and you should be good to go!
A disadvantage of the previous line is that you need a temporary directory for the new files... There is also a simple way to run culling only on unprocessed jpgs:
RAW=/path/to/raw && JPG=/path/to/jpg && find $RAW $JPG -type f \( -name '*.raw' -o -name '*.jpg' \) -exec basename {} \; | sed -e 's/\.jpg$//g' -e 's/\.raw$//g' | sort -n | uniq -u | while read line ; do echo "processing ${RAW}/${line}.raw" && culling ${line}.raw -output ${JPG}/$line.jpg ; done
Again, replace culling with your culling command.
If you need just remove photos from RAW which are not present in JPG dir:
RAW=/path/to/raw && JPG=/path/to/jpg && find $RAW $JPG -type f \( -name '*.raw' -o -name '*.jpg' \) -exec basename {} \; | sed -e 's/\.jpg$//g' -e 's/\.raw$//g' | sort -n | uniq -u | while read line ; do echo "Removing: ${RAW}/${line}.raw" && rm -f ${RAW}/${line}.raw ; done
keep in mind all extensions expected to be lowercase, if not change to what is in your side.
Before running it for real, run this test command first:
RAW=/path/to/raw && JPG=/path/to/jpg && find $RAW $JPG -type f \( -name '*.raw' -o -name '*.jpg' \) -exec basename {} \; | sed -e 's/\.jpg$//g' -e 's/\.raw$//g' | sort -n | uniq -u | while read line ; do echo "Removing: ${RAW}/${line}.raw" && echo rm -f ${RAW}/${line}.raw ; done
Also, if you will be putting it into a script.bash file, replace && and ; with newlines for more readable text.
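If the goal is just the last case (delete every raw whose jpg was culled), a plain loop is easier to audit than the find/uniq pipeline; this is a sketch, so test it with the rm replaced by echo first:

```shell
# Remove every .raw in $RAW whose matching .jpg no longer exists in $JPG.
RAW=/path/to/raw
JPG=/path/to/jpg
for r in "$RAW"/*.raw; do
    [ -e "$r" ] || continue                  # glob matched nothing
    base=$(basename "$r" .raw)
    if [ ! -e "$JPG/$base.jpg" ]; then
        echo "Removing: $r"
        rm -f -- "$r"
    fi
done
```

This also sidesteps the word-splitting problems of piping ls into while read.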
| How to sync two folders with the same file names but different extensions |
1,586,785,493,000 |
Accessing these locations from Windows times out or, in the worst case, hangs until smbd is restarted. Accessing them via SSH hangs until the connection is closed. I forcefully checked the RAID5 array where the troublesome directories are, but it didn't find anything besides a lot of 'extent tree could be narrower' (which, as I read, is not serious). 2.5T (73%) of the array is used, but I don't think that would be the issue. According to /proc/loadavg, the average system load is 6.63 7.17 6.90, which I don't think is that big. There were no problems either until about two-three weeks ago.
I've found this in dmesg
[70920.276372] sd 8:0:0:0: [sdf] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[70920.276382] sd 8:0:0:0: [sdf] tag#0 Sense Key : Medium Error [current]
[70920.276387] sd 8:0:0:0: [sdf] tag#0 Add. Sense: Unrecovered read error
[70920.276395] sd 8:0:0:0: [sdf] tag#0 CDB: Read(10) 28 00 3f 14 00 f0 00 00 f0 00
[70920.276400] print_req_error: critical medium error, dev sdf, sector 1058275568
How can I find out what's causing the issue and how to fix it?
|
According to dmesg, the problem was caused by one of the member drives beginning to fail (sector fault). After removing the failing drive, the array is working normally again.
| Accessing certain locations on a home server, hangs/times out. How can I find the cause and fix it? [closed] |
1,586,785,493,000 |
File names :
_10_-_Overriding_or_customizing_the_rest_end_point-rkkfgI502f0.mp4
_11_-_Expose_ids_in_json_response-CrDXtLfiZos.mp4
_12_-_Create_angular_8_project_using_Angular_CLI-kSXkW1hF0KU.mp4
_13_-_Create_a_model_class_for_Book_entity-Hfm3da1Ze8E.mp4
_14_-_Display_the_list_of_books_in_html_table_with_hard-coded_values-b5R8CsMrOO4.mp4
_15_-_Create_a_new_book-list_component_and_display_the_book_images-Tto3r229fFA.mp4
_16_-_Make_a_HTTP_GET_request_to_the_Spring_boot_application-98RfVQ9Z3ZM.mp4
_17_-_Understanding_the_Observable_and_Observer-NKLirs5SFYk.mp4
_18_-_Call_a_service_method_to_get_the_book_array-yQ34aPdH1_0.mp4
_19_-_Fix_the_error_CORS_policy_and_display_the_data_in_html_table-YSEAdODxMfE.mp4
_1_-_Course_Introduction-b4pjjftApmY.mp4
_20_-_Replace_the_blank_images_with_real_images-fut1f40FHo4.mp4
_2_-_Setup_the_development_environment-RbUGvRAUpSM.mp4
_3_-_Setup_the_MySQL_database-D3krImBhofo.mp4
_4_-_Create_repository_in_Github_and_add_it_to_Eclipse_IDE-MAkVtB_MhzI.mp4
_5_-_Create_spring_boot_project_using_spring_initializer-GsmqGxEv6rg.mp4
_6_-_Configure_application_properties_and_commit_changes_to_github-HqDZKih-Ehk.mp4
_7_-_Create_an_entity_class_for_book_table-pfxt3BeU_e0.mp4
_8_-_Create_an_entity_class_for_book_table-eg1pJJLAzAQ.mp4
_9_-_Create_rest_repositories_for_book_and_category_entity-w7vFTSCWCOM.mp4
How can I remove the single _ character from the beginning of the file names?
|
In the directory that contain those files, issue
for file in _*; do mv "$file" "${file#_}"; done
${file#exp} deletes the shortest match of the pattern exp from the beginning of file.
| Removing a single character from the beginning of file names |
1,586,785,493,000 |
This is weird:
$ ls -l 'Lana Del Rey - Blue Jeans (Remastered 2011).mp3'
-rw-rw-r-- 1 gigi gigi 4.0M Dec 11 23:06 'Lana Del Rey - Blue Jeans (Remastered 2011).mp3'
$ find . -name 'Lana Del Rey - Blue Jeans (Remastered 2011).mp3'
./Lana Del Rey - Blue Jeans (Remastered 2011).mp3
# but still in the same directory:
$ find `pwd` -name 'Lana Del Rey - Blue Jeans (Remastered 2011).mp3'
# nothing found!
# directly using the path pointed by pwd will produce the same nothing-found situation
# with pwd followed by / it works
$ find `pwd`/ -name 'Lana Del Rey - Blue Jeans (Remastered 2011).mp3'
/home/gigi/Music/Youtube_mp3/Lana Del Rey - Blue Jeans (Remastered 2011).mp3
$ pwd
/home/gigi/Music/Youtube_mp3
This happens on Ubuntu 21.10 (XUbuntu in fact).
I use no alias overlapping with find.
|
This is for essentially the same reasons as discussed here:
find does not work on symlinked path?
In particular, find does not follow symlinks by default - and (at least assuming you are using the bash shell) the builtin pwd command does not resolve them either. You have a number of options to make the behavior with pwd the same as with . when the current directory is reached through a symbolic link:
use the builtin pwd, but force resolution of symlinks using pwd -P
use /bin/pwd instead of pwd; on Ubuntu this will almost certainly be the GNU Coreutils implementation, which assumes -P by default
tell find to follow symlinks in its command line arguments, by adding the -H command line option.
In the last case, you could use -L in place of -H however that will follow symlinks everywhere, which may produce results different from find .
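A quick, self-contained demonstration of the pwd difference (run anywhere writable):

```shell
# Enter a directory through a symlink: the shell keeps the logical path.
mkdir -p target
ln -s target link
cd link
pwd      # logical path, ends in .../link
pwd -P   # physical path, ends in .../target
```

Passing the pwd -P output (or using find -H) makes the `pwd` and `.` variants behave the same.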
| Why existing file is not found with find with `pwd` but is found with find dot? |
1,586,785,493,000 |
How can I move everything but the last n files from dir1 to dir2?
I currently do this, using time as an approximation for n; in my case n=2, every 10 minutes:
find /dir1/ -name '*.txt*' -mmin +10 -type f -exec mv "{}" /dir2/ \;
A command like the following could work, but I am not certain; could somebody confirm how I should adapt this?
ls -1tr | head -n -2 | xargs -d '\n' mv -f --
|
With zsh:
mv dir1/**/*.txt*(D.om[3,-1]) dir2/
Would move the regular files in dir1 except for the 2 most recently modified ones to dir2.
**/: any level of sub-directory.
D: include hidden files and descend into hidden dirs.
.: only regular files (no symlink, directory...), equivalent for find's -type f.
om: sort by modification time (most recent first like with ls -t).
[3,-1]: only from 3rd to last
(you can issue a zmodload zsh/files to get a builtin mv, or use zargs, if you run into an "argument list too long" issue).
POSIXly, that simply can't be done without making some assumptions on the names of the files, the number of files and the length of their paths.
GNUly (with recent versions of GNU tools for -z), you could do:
find dir1 -name '*.txt*' -type f -printf '%T@\t%p\0' |
sort -rnz | tail -zn +3 | cut -zf2- | xargs -r0 mv -t dir2
While GNU sort and xargs have had -z/-0 options for decades, the addition of -z for cut and tail is fairly recent. If you have older versions of those, you can always do:
find dir1 -type f -printf '%T@\t%p\0' | sort -rnz |
tr '\n\0' '\0\n' |
tail -n +3 | cut -f2- |
tr '\n\0' '\0\n' | xargs -r0 mv -t dir2
Note that while those solutions look for files recursively in dir1 (including in subdirectories), they won't recreate the same directory structure in dir2. That means that for instance if there were both a dir1/file.txt and dir1/subdir/file.txt, they would both end up being moved to dir2/file.txt, one overwriting the other.
| How to move everything but the last files? |
1,586,785,493,000 |
I have thousands of text files on a Linux machine; each file's name has a prefix (OG00*) and each file contains 9 unique IDs. I want to create one text file for each of these IDs, with file names OG0012637_1.txt, OG0012637_2.txt, OG0012637_3.txt, OG0012637_4.txt, OG0012637_5.txt, ..., OG0012637_9.txt
Input:
$ cat OG0012637.txt
TRINITY_DN9932_c0_g2_i1.p1
TRINITY_DN17663_c0_g1_i1.p1
TRINITY_DN6645_c0_g1_i2.p1
TRINITY_DN2462_c0_g1_i2.p1
TRINITY_DN19713_c3_g1_i2.p1
TRINITY_DN4587_c0_g1_i1.p1
TRINITY_DN4405_c0_g1_i1.p1
TRINITY_DN7191_c1_g2_i1.p1
TRINITY_DN1740_c0_g1_i2.p1
Desired output files:
$ cat OG0012637_1.txt
TRINITY_DN9932_c0_g2_i1.p1
$ cat OG0012637_2.txt
TRINITY_DN17663_c0_g1_i1.p1
$ cat OG0012637_3.txt
TRINITY_DN6645_c0_g1_i2.p1
$ cat OG0012637_4.txt
TRINITY_DN2462_c0_g1_i2.p1
$ cat OG0012637_5.txt
TRINITY_DN19713_c3_g1_i2.p1
$ cat OG0012637_6.txt
TRINITY_DN4587_c0_g1_i1.p1
$ cat OG0012637_7.txt
TRINITY_DN4405_c0_g1_i1.p1
$ cat OG0012637_8.txt
TRINITY_DN7191_c1_g2_i1.p1
$ cat OG0012637_9.txt
TRINITY_DN1740_c0_g1_i2.p1
|
If you don't have access to the GNU implementation of split, then with awk:
awk '
FNR==1 {
basename = substr(FILENAME,1,length(FILENAME)-4)
}
{
outfile = basename "_" FNR ".txt"; print > outfile; close(outfile)
}
' OG*.txt
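For the record, recent GNU coreutils can do the per-file split in one command; --numeric-suffixes=1 and --additional-suffix are GNU extensions, and -a 1 allows at most 9 output files, which matches the 9 IDs per file:

```shell
# Stand-in input file with one ID per line.
printf 'id1\nid2\nid3\n' > OG0012637.txt
# One output file per input line: OG0012637_1.txt, OG0012637_2.txt, ...
split -l 1 --numeric-suffixes=1 -a 1 --additional-suffix=.txt \
    OG0012637.txt OG0012637_
```

For thousands of files you would wrap this in a loop over the inputs, deriving the OG0012637_ prefix from each file name with ${f%.txt}_.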
| How do I split and name the text file (based on the no. of lines of content) for bigdata? |
1,586,785,493,000 |
I'm new to Linux and trying multiple variants of Ubuntu (standard, Mint, Pop, etc.). Unfortunately, every OS is isolated on different partitions, with separate settings, user groups, etc. and programs have to be installed each time I install a new OS. I would like to have a primary OS (Ubuntu LTS) and then all subsequent OS's refer to the primary OS for user profiles, program installations, etc. - Is this possible?
My purpose is twofold: 1) ease of trying new distros without hassling with setup/maintenance of multiple profiles and programs, and 2) save on disk space by reducing duplicate files.
I know how to access files and mount folders between each distro's partition, but is there a way to trick the OS into thinking the primary partition is where it should be looking for everything?
I don't mind trying things that are experimental, as this is a new system and I have no critical data on it yet.
|
Is this possible?
No. You can share user settings easily by creating a separate partition for /home and mounting it in all your used OSes. And if you have different /homes you can use symlinks.
However, sharing programs doesn't make any sense whatsoever: different distros may ship different versions of applications, so in certain cases configuration files may be incompatible. Besides, most users never touch anything in /etc, so sharing /home covers most of what matters.
| Can multiple operating systems share profiles and programs? |
1,586,785,493,000 |
I'm using KDE Plasma, and frequently need to package file sets into tar.gz or zip. This usually requires opening Ark, creating a new file, and working from there, which is a run-around. It would be better if I could right-click in the Dolphin workspace for a directory, go the "Create New..." submenu, and pick this file type out; but as of the moment, the only options are:
Folder...
Text File...
LibreOffice Calc/Draw/Impress/Writer...
Link to Location/File/Directory/Application...
This feels very much like something I should be able to customize, but I unfortunately have no idea where to look for the option. Does anyone know how I can add additional file types to this menu?
|
It's apparently necessary to add a .desktop file to /usr/share/templates or ~/.kde4/share/templates. In the latter case, the templates folder should be created if necessary. See Adding an entry to the Create New menu in the KDE UserBase Wiki for more info.
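As an untested sketch of what such a template might look like for tar.gz: KDE templates conventionally use Type=Link with a URL pointing at a seed file in a .source subdirectory, so the file names and paths below are assumptions to adapt:

```ini
# Save as targz.desktop in the templates folder described above
# (newer Plasma versions look in ~/.local/share/templates/ instead).
[Desktop Entry]
Name=Tar Archive...
Type=Link
URL=.source/archive.tar.gz
Icon=application-x-compressed-tar
```

You would also place an empty archive.tar.gz in a .source/ subdirectory next to the .desktop file; Dolphin then copies that seed file under whatever name you type.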
| How can I add custom entries to "Create New" on KDE Plasma? |
1,586,785,493,000 |
I have a folder with ~6000 files (some of them are .txt and some .pdf files) and I am trying to organize them into different folders. The folder looks like this:
$ ls ./res-defaults
ML3020T1--ML3020N_chr6-209980-34769899-LOH_clusters.pdf
ML3020T1--ML3020N_chrom_clust_freqs.txt
ML3020T1--ML3020N_cluster_summary.txt
ML3020T1--ML3020N_mol_time_estimate.pdf
HTMCP-01-01-00451-01A-01D--HTMCP-01-01-00451-11B-01D_boots.txt
....
I have then another file which is a metadata file
$ head meta.data
bam TRUE 81-52884 81-52884T tumour grch37 genome A01423 DL_M
bam TRUE 06-30342 ML3020T1 tumour grch37 genome A43002 ML_K
bam TRUE 10-24757 10-24757T tumour grch37 genome A61218 CL_GC
bam TRUE HTMCP-01-01-00451 HTMCP-01-01-00451-01A-01D tumour grch37 genome A71785 DL_HTMCP
....
The string before the "--" in each file name in the res-defaults folder matches column 4 in the metadata file.
I want to create folders named after column 9 of the metadata and move each file in res-defaults into the folder whose row (matched via column 4) corresponds to the characters before "--".
I am expecting outputs like this
$ ls ./ML_K
ML3020T1--ML3020N_chr6-209980-34769899-LOH_clusters.pdf
ML3020T1--ML3020N_chrom_clust_freqs.txt
ML3020T1--ML3020N_cluster_summary.txt
ML3020T1--ML3020N_mol_time_estimate.pdf
and
$ ls ./DL_HTMCP
HTMCP-01-01-00451-01A-01D--HTMCP-01-01-00451-11B-01D_boots.txt
I honestly do not know how to do that with bash shell!
|
You can use awk to print the 4th and 9th fields:
$ awk '{print $4,$9}' meta.data
81-52884T DL_M
ML3020T1 ML_K
10-24757T CL_GC
HTMCP-01-01-00451-01A-01D DL_HTMCP
Next, pass that to read and assign each field to a variable. Then, create the target directories (use mkdir -p so that it won't complain if the directory already exists), and move any file names starting with the prefix (4th field) into the directory name given in the 9th field:
awk '{print $4,$9}' meta.data |
while read prefix dirname; do
mkdir -p -- "$dirname" && mv -- "$prefix"* "$dirname";
done
| moving files into different directories based on their names matching with another file |
1,586,785,493,000 |
I am using Linux and I want to write a shell script that takes two directories and moves the second directory into the first one (so the second directory becomes a subdirectory of the first one), and all the files from the second directory get a ".txt" extension. For example: dir2 contains:
file1
file2
dir3
file3.jmp
After running ./shell_scrip dir1 dir2, I want dir1 to contain dir2 and dir2 would look like this:
file1.txt
file2.txt
dir3
file3.txt
I tried to change the extensions but I got this error:
mv: cannot stat `file1`: No such file or directory
using the following code:
#!/bin/sh
for file in $2/*;
do
f=$(basename "$file")
mv "$f" "${f}.txt"
done
|
You're not moving or referencing dir2.
Try something like this:
#!/bin/sh
mv "$2" "$1" || exit # Make $2 a subdirectory of $1
cd "$1/$(basename "$2")" || exit # Change directories for simplicity
for f in *; do
mv "$f" "${f%.*}.txt" # Add or change the extension
done
Adding || exit after the mv and cd commands will cause the script to exit if the command fails, which gives a little protection in case things aren't what you expect.
The expression ${f%.*} is the same as $f if there's no period in the name. Otherwise it removes the period (the last period) and everything after it.
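The effect of that expansion can be checked directly in the shell, using both cases from the example:

```shell
f='file3.jmp'; echo "${f%.*}.txt"   # extension replaced: file3.txt
f='file1';    echo "${f%.*}.txt"    # no period, so just appended: file1.txt
```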
| How to change the extension of all files from a directory? |
1,586,785,493,000 |
I'm trying to upload all files in a directory to s3 using dates in the filenames as parameters to create the s3 locations. Here's what I have so far.
for file in /home/ec2-user/clickparts/t*; do
year="${file:9:4}"
month="${file:14:2}"
day="${file:17:2}"
aws s3 cp "$file" s3://mybucket/json/clicks/clickpartition/$year/$month/$day/
done
Below is the output for the file "the_date=2017-05-04"
upload: ./the_date=2017-05-04 to s3://mybucket/json/clicks/clickpartition/-use//c/ic//the_date=2017-05-04
I want to put the file in
s3://mybucket/json/clicks/clickpartition/2017/05/04/the_date=2017-05-04
|
Given a file "the_date=2017-05-04", your for loop will set the file variable to /home/ec2-user/clickparts/the_date=2017-05-04. If you take 4 characters from the 9th character, you get -use, which is what you see where your year variable is used.
One way to fix this is to take account of the number of characters in your path, and add the number of characters (in this case 26) to each of the start numbers when setting your year month and day variables.
Another way might be to change to the appropriate directory before the for loop (and change back after it finishes), then your for loop becomes for file in t*; do, which would set your file variable to what I believe you are expecting.
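A third option, close in spirit to the original loop, is to strip the directory part with parameter expansion so the substring offsets stay as written (a sketch; the bucket path is the asker's, and the echo makes this a dry run to be removed once the destinations look right):

```shell
dir=/home/ec2-user/clickparts
for file in "$dir"/t*; do
    name=${file##*/}                 # e.g. the_date=2017-05-04
    year=${name:9:4}
    month=${name:14:2}
    day=${name:17:2}
    # echo makes this a dry run; drop it to actually upload
    echo aws s3 cp "$file" "s3://mybucket/json/clicks/clickpartition/$year/$month/$day/"
done
```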
| Use substrings of filename as parameters in for loop that builds aws command |
1,586,785,493,000 |
I need to rename a bunch of files (more than 100) on an Ubuntu system, and want to know how I can do that when the file names follow a pattern like "Filename_01.jpg" that should become "NameOfFile_01.jpg". In Windows, I would type:
ren Filename_*.jpg NameOfFile*.jpg
Because of the convoluted way the various commands I have found (rename, mmv, etc.) work, and the syntax examples, I can't make heads or tails of those commands. I don't need to full explanation of how the command works, I just need the exact syntax to do this.
|
One approach (depending on the distro the rename will have a different syntax, this is from the default in the debian family):
tink@box1:~/tmp$ ls
ranting filename_17.jpg filename_27.jpg filename_37.jpg filename_47.jpg
blub filename_18.jpg filename_28.jpg filename_38.jpg filename_48.jpg
ds_words.de filename_19.jpg filename_29.jpg filename_39.jpg filename_49.jpg
ds_words.es filename_1.jpg filename_2.jpg filename_3.jpg filename_4.jpg
filename_10.jpg filename_20.jpg filename_30.jpg filename_40.jpg filename_50.jpg
filename_11.jpg filename_21.jpg filename_31.jpg filename_41.jpg filename_5.jpg
filename_12.jpg filename_22.jpg filename_32.jpg filename_42.jpg filename_6.jpg
filename_13.jpg filename_23.jpg filename_33.jpg filename_43.jpg filename_7.jpg
filename_14.jpg filename_24.jpg filename_34.jpg filename_44.jpg filename_8.jpg
filename_15.jpg filename_25.jpg filename_35.jpg filename_45.jpg filename_9.jpg
filename_16.jpg filename_26.jpg filename_36.jpg filename_46.jpg
tink@box1:~/tmp$ rename -e 's/filename_/NameOfFile_/' *jpg
tink@box1:~/tmp$ ls
ranting NameOfFile_17.jpg NameOfFile_27.jpg NameOfFile_37.jpg NameOfFile_47.jpg
blub NameOfFile_18.jpg NameOfFile_28.jpg NameOfFile_38.jpg NameOfFile_48.jpg
ds_words.de NameOfFile_19.jpg NameOfFile_29.jpg NameOfFile_39.jpg NameOfFile_49.jpg
ds_words.es NameOfFile_1.jpg NameOfFile_2.jpg NameOfFile_3.jpg NameOfFile_4.jpg
NameOfFile_10.jpg NameOfFile_20.jpg NameOfFile_30.jpg NameOfFile_40.jpg NameOfFile_50.jpg
NameOfFile_11.jpg NameOfFile_21.jpg NameOfFile_31.jpg NameOfFile_41.jpg NameOfFile_5.jpg
NameOfFile_12.jpg NameOfFile_22.jpg NameOfFile_32.jpg NameOfFile_42.jpg NameOfFile_6.jpg
NameOfFile_13.jpg NameOfFile_23.jpg NameOfFile_33.jpg NameOfFile_43.jpg NameOfFile_7.jpg
NameOfFile_14.jpg NameOfFile_24.jpg NameOfFile_34.jpg NameOfFile_44.jpg NameOfFile_8.jpg
NameOfFile_15.jpg NameOfFile_25.jpg NameOfFile_35.jpg NameOfFile_45.jpg NameOfFile_9.jpg
NameOfFile_16.jpg NameOfFile_26.jpg NameOfFile_36.jpg NameOfFile_46.jpg
On Ubuntu you need to install the rename package:
sudo apt install rename
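If rename isn't available, a plain shell loop does the same job (a sketch; run it in the directory containing the files, matching the lowercase names from the listing above):

```shell
for f in filename_*.jpg; do
    # strip the old prefix and put the new one in front
    mv -- "$f" "NameOfFile_${f#filename_}"
done
```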
| How to rename files in Linux like the ren command in Windows |
1,404,054,539,000 |
Suppose I have a PDF and I want to obtain whatever metadata is available for that PDF. What utility should I use?
I find the piece of information I am usually most interested in knowing is the paper size, something that PDF viewers usually don't report. E.g. is the PDF size letter, legal, A4 or something else? But the other information available may be of interest too.
|
One of the canonical tools for this is pdfinfo, which comes with xpdf (on most modern distributions it is packaged in poppler-utils). Example output:
[0 1017 17:10:17] ~/temp % pdfinfo test.pdf
Creator: TeX
Producer: pdfTeX-1.40.14
CreationDate: Sun May 18 09:53:06 2014
ModDate: Sun May 18 09:53:06 2014
Tagged: no
Form: none
Pages: 1
Encrypted: no
Page size: 595.276 x 841.89 pts (A4)
Page rot: 0
File size: 19700 bytes
Optimized: no
PDF version: 1.5
| Discovering metadata about a PDF |
1,404,054,539,000 |
I am writing a bash script that I want to echo out metadata (length, resolution etc.) of a set of videos (mp4) into a file.
Is there a simple way to get this information from an MP4 file?
|
On a Debian-based system (but presumably, other distributions will also have mediainfo in their repositories):
$ sudo apt-get install mediainfo
$ mediainfo foo.mp4
That will spew out a lot of information. To get, for example, the length, resolution, codec and dimensions use:
$ mediainfo "The Blues Brothers.mp4" | grep -E 'Duration|Format |Width|Height' | sort | uniq
Duration : 2h 27mn
Format : AAC
Format : AVC
Format : MPEG-4
Height : 688 pixels
Width : 1 280 pixels
| Get metadata from a video in the terminal |
1,404,054,539,000 |
It is well-known that empty text files have zero bytes:
However, each of them contains metadata, which according to my research, is stored in inodes, and do use space.
Given this, it seems logical to me that it is possible to fill a disk purely by creating empty text files. Is this correct? If so, how many empty text files would I need to fill a disk of, say, 1GB?
To do some checks, I ran df -i, but this apparently shows the percentage of inodes being used rather than how much space they take up.
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 947470 556 946914 1% /dev
tmpfs 952593 805 951788 1% /run
/dev/sda2 28786688 667980 28118708 3% /
tmpfs 952593 25 952568 1% /dev/shm
tmpfs 952593 5 952588 1% /run/lock
tmpfs 952593 16 952577 1% /sys/fs/cgroup
/dev/sda1 0 0 0 - /boot/efi
tmpfs 952593 25 952568 1% /run/user/1000
/home/lucho/.Private 28786688 667980 28118708 3% /home/lucho
|
This output suggests 28786688 inodes overall, after which the next attempt to create a file in the root filesystem (device /dev/sda2) will return ENOSPC ("No space left on device").
Explanation: on the original *nix filesystem design, the maximum number of inodes is set at filesystem creation time. Dedicated space is allocated for them. You can run out of inodes before you run out of space for data, or vice versa. The most common default Linux filesystem ext4 still has this limitation. For information about inode sizes on ext4, look at the manpage for mkfs.ext4.
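As a rough illustration (assuming the common 256-byte ext4 inode size; check yours with tune2fs -l), the inode table for the 28786688 inodes above occupies a fixed amount of space whether or not the inodes are in use:

```shell
# 28786688 inodes * 256 bytes each, converted to MiB
echo "$(( 28786688 * 256 / 1024 / 1024 )) MiB reserved for the inode table"
# 7028 MiB, i.e. about 6.9 GiB
```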
Linux supports other filesystems without this limitation. On btrfs, space is allocated dynamically. "The inode structure is relatively small, and will not contain embedded file data or extended attribute data." (ext3/4 allocates some space inside inodes for extended attributes). Of course you can still run out of disk space by creating too much metadata / directory entries.
Thinking about it, tmpfs is another example where inodes are allocated dynamically. It's hard to know what the maximum number of inodes reported by df -i would actually mean in practice for these filesystems. I wouldn't attach any meaning to the value shown.
"XFS also allocates inodes dynamically. So does JFS. So did/does reiserfs. So does F2FS. Traditional Unix filesystems allocate inodes statically at mkfs time, and so do modern FSes like ext4 that trace their heritage back to it, but these days that's the exception, not the rule.
"BTW, XFS does let you set a limit on the max percentage of space used by inodes, so you can run out of inodes before you get to the point where you can't append to existing files. (Default is 25% for FSes under 1TB, 5% for filesystems up to 50TB, 1% for larger than that.) Anyway, this space usage on metadata (inodes and extent maps) will be reflected in regular df -h"
– Peter Cordes in a comment to this answer
| Can I run out of disk space by creating a very large number of empty files? |
1,404,054,539,000 |
If I export an image with, let's say, 300 DPI and I read out its meta-info with any application that can do it (like file, exiftool, identify, mediainfo, etc.), I always get a value showing Image-Width and Image-Height.
In this case: 2254 x 288
How do I get the 300 DPI value, or the corresponding value, from any other image file?
Since in my case the ratio of Image-Width to Image-Height does not matter, I want to be able to check the resolution of any image so that I can compile new images with the same quality independent of their proportions, since these vary from file to file.
For my workflow I'm especially interested in any command line solution, though any others are of course highly appreciated too.
|
You could use identify from imagemagick:
identify -format '%x,%y\n' image.png
Note however that in this case (a PNG image) identify will return the resolution in PPCM (pixels per centimeter) so to get PPI (pixels per inch) you need to add -units PixelsPerInch to your command (e.g. you could also use the fx operator to round value to integer):
identify -units PixelsPerInch -format '%[fx:int(resolution.x)]\n' image.png
There's also exiftool:
exiftool -p '$XResolution,$YResolution' image.png
though it assumes the image file has those tags defined.
| How to get the DPI of an image file (PNG) |
1,404,054,539,000 |
I am using Trisquel 7.0 with Nautilus 3.10.1 installed.
Whenever I display the properties of a file, there is one file-specific tab, such as Image, Audio/Video, or Document, which displays special information about it.
Example for an image:
Example for a PDF Document:
How does Nautilus get this type of file-specific information?
And how do I print this information (metadata) with the command-line?
|
For the first level of information in the command line, you can use file.
$ file gtu.pdf
gtu.pdf: PDF document, version 1.4
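In scripts, file's -b (brief) flag drops the leading filename, which makes the output easier to parse (a small sketch using a minimal stand-in file, since a bare %PDF header is enough for the magic detection):

```shell
printf '%%PDF-1.4\n' > sample.pdf   # minimal stand-in, just the header
file -b sample.pdf                   # e.g. "PDF document, version 1.4"
```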
For most formats, and more detailed information, you can also use
Exiftool:
NAME
exiftool - Read and write meta information in files
SYNOPSIS
exiftool [OPTIONS] [-TAG...] [--TAG...] FILE...
exiftool [OPTIONS] -TAG[+-<]=[VALUE]... FILE...
exiftool [OPTIONS] -tagsFromFile SRCFILE [-SRCTAG[>DSTTAG]...] FILE...
exiftool [ -ver | -list[w|f|r|wf|g[NUM]|d|x] ]
For specific examples, see the EXAMPLES sections below.
This documentation is displayed if exiftool is run without an input FILE when one is expected.
DESCRIPTION
A command-line interface to Image::ExifTool, used for reading and writing meta information in a variety of
file types. FILE is one or more source file names, directory names, or "-" for the standard input.
Information is read from source files and printed in readable form to the console (or written to output text
files with -w).
Example:
$ exiftool IMG_20151104_102543.jpg
ExifTool Version Number : 9.46
File Name : IMG_20151104_102543.jpg
Directory : .
File Size : 2.8 MB
File Modification Date/Time : 2015:11:04 10:25:44+05:30
File Access Date/Time : 2015:11:17 18:56:49+05:30
File Inode Change Date/Time : 2015:11:11 14:55:43+05:30
File Permissions : rwxrwxrwx
File Type : JPEG
MIME Type : image/jpeg
Exif Byte Order : Big-endian (Motorola, MM)
GPS Img Direction : 0
GPS Date Stamp : 2015:11:04
GPS Img Direction Ref : Magnetic North
GPS Time Stamp : 04:55:43
Camera Model Name : Micromax A121
Aperture Value : 2.1
Interoperability Index : R98 - DCF basic file (sRGB)
Interoperability Version : 0100
Create Date : 2002:12:08 12:00:00
Shutter Speed Value : 1/808
Color Space : sRGB
Date/Time Original : 2015:11:04 10:25:44
Flashpix Version : 0100
Exif Image Height : 2400
Exif Version : 0220
Exif Image Width : 3200
Focal Length : 3.5 mm
Flash : Auto, Did not fire
Exposure Time : 1/809
ISO : 100
Components Configuration : Y, Cb, Cr, -
Y Cb Cr Positioning : Centered
Y Resolution : 72
Resolution Unit : inches
X Resolution : 72
Make : Micromax
Compression : JPEG (old-style)
Thumbnail Offset : 640
Thumbnail Length : 12029
Image Width : 3200
Image Height : 2400
Encoding Process : Baseline DCT, Huffman coding
Bits Per Sample : 8
Color Components : 3
Y Cb Cr Sub Sampling : YCbCr4:2:0 (2 2)
Aperture : 2.1
GPS Date/Time : 2015:11:04 04:55:43Z
Image Size : 3200x2400
Shutter Speed : 1/809
Thumbnail Image : (Binary data 12029 bytes, use -b option to extract)
Focal Length : 3.5 mm
Light Value : 11.9
There are also specific commands for some type of files, like pdf:
$ pdfinfo gtu.pdf
Title: Microsoft Word - Thermax Ltd
Author: User
Creator: PScript5.dll Version 5.2.2
Producer: GPL Ghostscript 8.15
CreationDate: Tue Jan 27 11:51:38 2015
ModDate: Tue Jan 27 12:30:40 2015
Tagged: no
Form: none
Pages: 1
Encrypted: no
Page size: 612 x 792 pts (letter)
Page rot: 0
File size: 64209 bytes
Optimized: yes
PDF version: 1.4
| How to print Metadata of a file with the help of command-line? |
1,404,054,539,000 |
I have an Asustor NAS that runs on Linux; I don't know what distro they use.
I'm able to log in it using SSH and use all Shell commands. Internal Volume uses ext2, and external USB HDs use NTFS.
When I try to use cp command to copy any file around, that file's date metadata is changed to current datetime.
For example, if I use Windows to copy the file over SMB and the file was modified in 2007, the new file is marked as created now in 2017 but modified in 2007. With the Linux cp command, however, its modified date is changed to 2017 too.
This modified date is very relevant to me because it allows me to sort files in Windows Explorer by their modified date. If it's overwritten, I'm unable to sort, and they all seem to have been created now. I also use the modified date to know when I acquired some rare old files.
Is there any parameter I can use in cp command to preserve original file metadata?
Update: I tried cp --preserve=timestamps but it didn't work, it printed:
cp: unrecognized option '--preserve=timestamps'
BusyBox v1.19.3 (2017-03-22 17:23:49 CST) multi-call binary.
Usage: cp [OPTIONS] SOURCE DEST
Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY
-a Same as -dpR
-R,-r Recurse
-d,-P Preserve symlinks (default if -R)
-L Follow all symlinks
-H Follow symlinks on command line
-p Preserve file attributes if possible
-f Overwrite
-i Prompt before overwrite
-l,-s Create (sym)links
If I try just -p it says cp: can't preserve permissions of '...': Operation not permitted, but as far as I've tested, timestamps are being preserved.
|
If you use man cp to read the manual page for the copy command you'll find the -p and --preserve flags.
-p same as --preserve=mode,ownership,timestamps
and
--preserve[=ATTR_LIST] preserve the specified attributes (default: mode,ownership,timestamps), if possible additional attributes: context, links, xattr, all
What this boils down to is that you should use cp -p instead of just cp.
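If -p still can't preserve everything (as with the BusyBox "Operation not permitted" message for ownership), the timestamp alone can be copied afterwards with touch -r, which BusyBox's touch typically also supports (a sketch; paths are examples):

```shell
cp source.txt /mnt/usb/source.txt
# copy the access/modification times from the original onto the copy
touch -r source.txt /mnt/usb/source.txt
```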
| cp losing file's metadata |
1,404,054,539,000 |
In a shell, how to automatically set the modification (or creation) date and time of a Quicktime video file based on the metadata in the file with a single command (or a single command line)? For JPG files, we have exiv2 -T, but is there a similar command for .mov files?
To give an example, let's start with a file video.mov with the following metadata:
$ exiftool video.mov
ExifTool Version Number : 12.57
File Name : video.mov
Directory : .
File Size : 64 MB
File Modification Date/Time : 2023:07:04 02:53:05+02:00
File Access Date/Time : 2023:07:01 11:42:46+02:00
File Inode Change Date/Time : 2023:07:04 02:53:05+02:00
File Permissions : -rw-r--r--
File Type : MOV
File Type Extension : mov
MIME Type : video/quicktime
Major Brand : Apple QuickTime (.MOV/QT)
Minor Version : 0.0.0
Compatible Brands : qt
Media Data Size : 64215615
Media Data Offset : 36
Movie Header Version : 0
Create Date : 2023:07:01 11:42:00
Modify Date : 2023:07:01 11:42:46
Time Scale : 600
Duration : 0:00:45
Preferred Rate : 1
Preferred Volume : 100.00%
Preview Time : 0 s
Preview Duration : 0 s
Poster Time : 0 s
Selection Time : 0 s
Selection Duration : 0 s
Current Time : 0 s
Next Track ID : 6
Track Header Version : 0
Track Create Date : 2023:07:01 11:42:00
Track Modify Date : 2023:07:01 11:42:46
Track ID : 1
Track Duration : 0:00:45
Track Layer : 0
Track Volume : 0.00%
Image Width : 1920
Image Height : 1080
Clean Aperture Dimensions : 1920x1080
Production Aperture Dimensions : 1920x1080
Encoded Pixels Dimensions : 1920x1080
Graphics Mode : ditherCopy
Op Color : 32768 32768 32768
Compressor ID : hvc1
Source Image Width : 1920
Source Image Height : 1080
X Resolution : 72
Y Resolution : 72
Compressor Name : HEVC
Bit Depth : 24
Video Frame Rate : 29.997
Balance : 0
Audio Format : mp4a
Audio Channels : 2
Audio Bits Per Sample : 16
Audio Sample Rate : 44100
Purchase File Format : mp4a
Warning : [minor] The ExtractEmbedded option may find more tags in the media data
Matrix Structure : 1 0 0 0 1 0 0 0 1
Content Describes : Track 1
Media Header Version : 0
Media Create Date : 2023:07:01 11:42:00
Media Modify Date : 2023:07:01 11:42:46
Media Time Scale : 600
Media Duration : 0:00:45
Media Language Code : und
Gen Media Version : 0
Gen Flags : 0 0 0
Gen Graphics Mode : ditherCopy
Gen Op Color : 32768 32768 32768
Gen Balance : 0
Handler Class : Data Handler
Handler Vendor ID : Apple
Handler Description : Core Media Data Handler
Meta Format : mebx
Handler Type : Metadata Tags
Make : Apple
Model : iPhone SE (2nd generation)
Software : 16.5.1
Creation Date : 2023:07:01 13:42:00+02:00
Image Size : 1920x1080
Megapixels : 2.1
Avg Bitrate : 11.3 Mbps
Rotation : 90
The best approach (to set the modification date) I could come up with myself so far is reading the output of
$ exiftool video.mov | grep "Media Modify Date" | cut -f 19-20 -d ' '
, which is, in my example,
2023:07:01 11:42:46
(which is correct as normalized to UTC or GMT because in real life, the video was taken at around 13:42:… CEST), replacing : in the date in output with -, and finally issuing
$ touch -d "2023-07-01 11:42:46 UTC" video.mov
(my wild guess is that saying UTC is better than saying GMT above). This yields, as expected,
$ ls --full-time video.mov | cut -d ' ' -f 6-8
2023-07-01 13:42:46.000000000 +0200
(the machine is in the time zone CEST, hence +0200). The result is what we want (because the time zone in which the video itself was taken was also CEST), but the process of getting there was manual.
How can I process the date from the first command sequence (the exiftool | grep | cut pipeline) automatically, so that both the first command and the second command (touch …) can be issued in a single command line or in a script?
Alternatively, the modification (or creation) time of the .mov video file has to be read from the metadata in the video file and set on the operating system level in some other way. How? (An aside: as the meta-data field Media Modify Date may be all-zeros for some files, e.g., for a file created by ffmpeg, we might need some more programming logic and try to switch to the values of some other fields in such a case, e.g., adding Date/Time Original and Media Duration if they are properly filled.)
Has anyone already done this task, perhaps, and we just need to run an already available program with appropriate parameters?
|
exiftool can set most of files metadata in addition to retrieving it, so it should just be a matter of:
TZ=UTC0 exiftool '-FileModifyDate<MediaModifyDate' ./*.mov
Or:
exiftool -api QuickTimeUTC '-FileModifyDate<MediaModifyDate' ./*.mov
Or recursively (also updating files in subdirectories):
exiftool -api QuickTimeUTC -r -ext mov '-FileModifyDate<MediaModifyDate' .
By default, exiftool interprets those Media timestamps in QuickTime MOV files as local time even though the QuickTime specification says they're meant to be in UTC as that's what most cameras do. With TZ=UTC0, we tell it the local time is UTC (is 0 offset from UTC), with -api QuickTimeUTC, we tell it those times are meant to be interpreted as UTC as that seems to be the case for you. Both should achieve the same result.
Files without a MediaModifyDate would end up with a 1904-01-01 00:00:00 +0000 timestamp (-2082844800 epoch time) as that's the origin time for QuickTime timestamps.
To skip those, you can do:
TZ=UTC0 exiftool -d %s -if '$MediaModifyDate != -2082844800' \
-r -ext mov '-FileModifyDate<MediaModifyDate' .
(using TZ=UTC0 instead of -api QuickTimeUTC as the latter doesn't seem to work when combined with -d %s specifically to format the time as Unix epoch time, which looks like a bug. Using other date formats would be timezone dependant so we couldn't compare to a fixed string like -2082844800)
If you wanted to use touch to set the mtime, you'd do:
mtime=$(
exiftool -api QuickTimeUTC -q -d %s -p '$MediaModifyDate' file.mov
) &&
[ "$mtime" != '0000:00:00 00:00:00' ] &&
touch -d "@$mtime" file.mov
(using -api QuickTimeUTC does seem to work OK with -d %s when it's just about printing that number here. And yes, you do get 0000:00:00 00:00:00 and not -2082844800 when there's no MediaModifyDate. I guess that could change in the future).
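If exiftool's epoch formatting weren't available, the raw YYYY:MM:DD hh:mm:ss string from the question's grep/cut pipeline can also be converted purely in the shell (a sketch; it assumes, as above, that the embedded time is meant as UTC):

```shell
ts='2023:07:01 11:42:46'      # as produced by the exiftool | grep | cut pipeline
d=${ts%% *}                    # date part:  2023:07:01
t=${ts#* }                     # time part:  11:42:46
touch -d "${d//:/-} $t UTC" video.mov   # colons in the date become dashes
```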
| How to automatically set the modification (or creation) time of a Quicktime video file based on its meta data? |
1,404,054,539,000 |
when opening a particular pdf file, evince decides to open it in "presentation mode".
I see in the man page that evince has -s option to open in presentation mode, but I did not invoke it. I am simply opening all pdfs as evince file.pdf
Somehow evince has decided on its own, to open this particular kind of pdf in presentation mode.
Other pdfs open just fine in normal window.
How can I disable this behavior?
|
Somehow evince has decided on its own, to open this particular kind of pdf in presentation mode.
The "decision" is due to a PDF feature:
/PageMode /FullScreen
You may wish to permanently disable it in the PDF itself by replacing it with
/PageMode /UseNone
... filling with 3 spaces at the end to keep filesize.
Note that evince stores the last presentation mode of a file so you might have to switch the previously viewed PDF to windowed mode first.
Very useful for frequently used PDFs. But for this question its only a workaround.
An example for how to patch:
sed -zE 's|(<<[^>]*/PageMode\s*)/FullScreen|\1/UseNone   |' Your.PDF > YourPatched.PDF
I would not do -i inplace as this search string is not fully failsafe for all thinkable cases.
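A quick way to sanity-check the patch on a stand-in file, including that the file size stays unchanged (real PDFs with compressed object streams won't expose /PageMode as plain text, so this is only a sketch of the substitution itself):

```shell
printf '<< /Type /Catalog /PageMode /FullScreen >>' > sample.bin
# same substitution as above, padded with 3 spaces to preserve the size
sed -zE 's|(<<[^>]*/PageMode\s*)/FullScreen|\1/UseNone   |' sample.bin > patched.bin
grep -o '/PageMode /UseNone' patched.bin
wc -c < sample.bin; wc -c < patched.bin   # the two byte counts should match
```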
| evince opens pdf in "presentation mode" |
1,404,054,539,000 |
I'm using RHEL7 on my server, and I have a directory containing many thousands of mixed .mp3 files, and I need a script to help me clean up that chaos.
Let's say 10 of my songs are by Miley Cyrus, 10 by Ed Sheeran, 10 by Beethoven, 10 by Mozart, and so on. All the mp3 files have numeric file names like 000.mp3, 001.mp3, etc.
Now I want to write a script that reads the metadata of all those .mp3 files and moves each file into a newly created directory named after the artist: Ed Sheeran's songs go into an Ed Sheeran directory.
How can I do this with a shell script or a Perl script?
|
With exiftool:
exiftool '-Directory<Artist' ./*.mp3
Recursively:
exiftool -ext mp3 '-Directory<$Directory/$Artist' -r .
| Shell Script reading metadata of a file and then mv each to a new directory |
1,404,054,539,000 |
Linux already stores a lot of metadata with files. e.g. owner, permissions, file's name, checksums (in some file systems), along with essential data like location on disk.
Is there any filesystem (e.g. btrfs, zfs, ext*) which allows you to store extra metadata?
|
Yes, most of the modern ones support extended attributes that can be used to store custom metadata: EXT4, Btrfs, ReiserFS, JFS, and ZFS. Check this question: What does mounting a filesystem with user_xattr do? and this page.
If you meant fork (it is like a companion file, that's kept together with the main one inside the filesystem), then some implementations of ZFS are also an option.
| Filesystems which allow storing custom metadata on files? |
1,404,054,539,000 |
I am trying to understand file/dir permissions in Linux.
A user can list the files in a directory using
cd test
ls -l
Even if the user issuing above commands does not have read, write or execute permission on any of the files inside the test directory, still he can list them because he/she has read permissions on the test directory.
Then why, in the following scenario, can user B change the permissions of a file he owns even though he does not have write permission on the parent directory?
User A, makes a test directory and gives other users ability to write in it:
mkdir test
chmod o+w test
User B, creates a file in test folder.
cd test
touch b.txt
User A removes write permission of others from the directory
chmod o-w test
User B can successfully change permissions, even though (as I assumed) permissions are part of the directory and this user does not have write permission on the parent directory of the file he owns
chmod g-r b.txt
Why does chmod not fail, given that the user cannot modify the directory which (supposedly) holds the file information such as permissions?
|
When you change a file's metadata (permissions, ownership, timestamps, …), you aren't changing the directory, you're changing the file's inode. This requires the x permission on the directory (to access the file), and ownership of the file (only the user who owns the file can change its permissions).
I think this is intuitive if you remember that files can have hard links in multiple directories. The directory contains a table that maps file names to inodes. If a file is linked under multiple names in multiple directories, that's still one inode with one set of permissions, ownership, etc., which shows that the file's metadata is in the inode, not in the directory.
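This can be seen directly with a hard link (a small sketch): changing the mode through one name is visible through the other, because both names point at the same inode:

```shell
mkdir -p demo && cd demo
touch f
ln f hardlink                  # a second directory entry for the same inode
chmod 600 f
stat -c '%i %a' f hardlink     # both lines show the same inode number and mode 600
cd ..
```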
Creating, renaming, moving or deleting a file involves modifying the directory, so it requires write permission on the directory.
| Why does chmod succeed on a file when the user does not have write permission on parent directory? |
1,404,054,539,000 |
Is there are way to use utility like diff to find the difference in metadata of two identical file hierarchies? If I have two identical file structures like
root_folder/
file1
file2
folder1/
file3
The diff utility will usually exit as though they are identical, but adding them to tarballs will produce different hashes. This indicates differences in metadata like timestamps, ownership, etc., but I would like to know what the exact differences are, and the default behavior of diff doesn't help me there.
|
I highly recommend diffoscope in this sort of situation.
You can run it before creating the tarballs, as
diffoscope dir1 dir2
to find the differences between the two directories (including metadata), or after creating them, as
diffoscope tarball1.tar tarball2.tar
to find the differences between the two tarballs.
| Diffing for metadata differences |
1,404,054,539,000 |
I was monitoring a directory containing downloads from Google Chrome with ls -la and got this in the output:
-????????? ? ? ? ? ? 'Unconfirmed 784300.crdownload'
I've never seen such question marks in the output.
There were other files in the directory with normal metadata output. When I ran ls -la again the output was all normal; the file still had the same name but the metadata was now visible. Later when the download finished the file was renamed to its final name, as expected.
I checked /var/log/syslog and dmesg output and didn't see any kernel messages.
I wonder if I hit some race condition? I wonder if there is a brief moment after the file is first created where the information is not yet available?
ext4 filesystem with seemingly standard mount options (rw,relatime,errors=remount-ro), 5.4.0-59-generic kernel on Ubuntu 20.04.1 LTS
|
That's a (temporary?) file which had disappeared in the time between ls reading its directory entry and trying to get the metadata from its inode.
You can reproduce that by stopping ls just before it calls lstat on a file, removing that file, and then letting it continue:
$ mkdir dir; touch dir/file
$ gdb -q ls
Reading symbols from ls...(no debugging symbols found)...done.
(gdb) br __lxstat
Breakpoint 1 at 0x4200
(gdb) r -l dir
...
Breakpoint 1, __GI___lxstat (vers=1, name=0x7fffffffdfca "dir",
buf=0x55555557c538) at ../sysdeps/unix/sysv/linux/wordsize-64/lxstat.c:34
(gdb) c
...
Breakpoint 1, __GI___lxstat (vers=1, name=0x7fffffffd3f0 "dir/file",
buf=0x55555557c538) at ../sysdeps/unix/sysv/linux/wordsize-64/lxstat.c:34
...
(gdb) shell rm dir/file
(gdb) c
...
/usr/bin/ls: cannot access 'dir/file': No such file or directory
total 0
-????????? ? ? ? ? ? file
wonder if I hit some race condition?
Kind of, but not really. It's simply the fact that ls does not hold a lock on the filesystem while it does its stuff ;-)
In any case, this is not a symptom of filesystem corruption or anything like that.
| Question marks in ls metadata output? |
1,404,054,539,000 |
On a Linux system,
I have a bunch of MP4 files named like 20190228_155905.mp4 but with no metadata. I've previously had a similar problem with some jpg's which I solved manually with
exiv2 -M"set Exif.Photo.DateTimeOriginal 2018:09:18 20:11:04" 20180918_201104.jpg
but as far as I can see, the DateTimeOriginal is only for images, not videos. Videos that do have metadata have a Xmp.video.MediaCreateDate field that seems like what I want. I guess it contains a Unix timestamp, so I'd need a way to get the date from the filename, convert it to a Unix timestamp and set that value to Xmp.video.MediaCreateDate. Is that all correct? Or am I overcomplicating things?
Edit:
If I wasn't clear: I want to set creation-date metadata on mp4 files using their filenames (which contain the date), so that programs can sort all my media files by their metadata.
|
This uses ffmpeg (sudo apt install ffmpeg to install) and works on your exact file names. It replaces your old files with new ones with the metadata set. Maybe try WITHOUT the && mv "~$f" "$f" part first:
$ for f in *.mp4; do ffmpeg -i "$f" -metadata creation_time="${f:0:4}-${f:4:2}-${f:6:2} ${f:9:2}:${f:11:2}:${f:13:2}" -codec copy "~$f" && mv "~$f" "$f"; done
Check metadata with:
$ ffprobe -v quiet 20190228_155905.mp4 -print_format json -show_entries stream=index,codec_type:stream_tags=creation_time:format_tags=creation_time
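The substring expansions in the loop can be verified against a sample name before rewriting any files:

```shell
f=20190228_155905.mp4
# year-month-day hour:minute:second, sliced out of the filename
echo "${f:0:4}-${f:4:2}-${f:6:2} ${f:9:2}:${f:11:2}:${f:13:2}"
# 2019-02-28 15:59:05
```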
| Batch set MP4 create date metadata from filename |
1,404,054,539,000 |
I have a drone that I used to make a flight movie, and I am going to use this footage to build a DEM (digital elevation model) of the topography I was filming. I can extract frames from the movie easily enough, but the method (ffmpeg) does not give these frames the lat-lon-elev-etc information necessary to reliably build the DEM. All this data is available in a .csv file stored in the drone flight control app, which I have downloaded.
I want to extract from this .csv file all the columns of navigational data. I can do this using awk. Then I want to write a script that will attach the navigational data from a certain timestamp in the flightpath to a corresponding still frame extracted from the movie (at the same timestamp). I can use exiftool for attaching GPS data to an image, but being quite new to shell scripting I cannot get my current nested loop to work.
Currently, my script writes all lines from the .csv file to every picture in the folder. Instead, I want to write line1 (lat-lon-elev-etc) to photo1, line2 to photo2, and so on. I feel I should be able to fix this, but can't crack it: any help very welcome!
# Using awk, extract the relevant columns from the flightpath dataset
awk -F, '{print $1,$2,$3,$7,$15,$22,$23 }' test.csv > test2.csv
# Read through .csv file line-by-line
# Make variables that can be commanded
while IFS=" " read -r latitude longitude altitude time compHeading gimbHeading gimbPitch
do
# exiftool can now command these variables
# write longitude and latitude to some photograph
for photo in *.png; do
exiftool -GPSLongitude="$longitude" -GPSLatitude="$latitude" *.png
done
# Following line tells bash which textfile to draw data from
done < test2.csv
|
If you have the same number of photos as there are lines in the CSV file, then you can use a simple for loop:
for photo in *.png; do
IFS=" " read -r latitude longitude altitude time compHeading gimbHeading gimbPitch
exiftool -GPSLongitude="$longitude" -GPSLatitude="$latitude" "$photo"
done < test2.csv
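To see why this pairs line N with photo N: the redirection feeds the whole loop, so each read consumes exactly one line per iteration. A toy run with fake data (no exiftool needed, file names are made up):

```shell
# fake CSV and fake photo names, just to show the pairing
printf '10 20\n30 40\n' > coords.csv
for photo in a.png b.png; do
    read -r lat lon
    echo "$photo -> $lat,$lon"
done < coords.csv
# prints:
# a.png -> 10,20
# b.png -> 30,40
```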
| Shell script to add different GPS data to series of photos |
1,404,054,539,000 |
I know how to change a tag value, and how to extract tag values of a file from its metadata, and yes we have great tools like id3tag, exiftool, ffmpeg and etc.
But I need to add a completely new tag, not change an existing one.
For example, consider a situation that we have a .mp3 file and it has 4 tags for its metadata:
1. Artist
2. Album
3. Genre
4. File Size
What I need, is to add a new tag (fifth tag) called Audio Bitrate. Is it possible? If yes so, how should it be done?
Thanks in advance
|
TL;DR You cannot define your own ID3 tags; you must use the ones defined in the spec. Since a tag for Audio Bitrate is not defined, you're out of luck. That is not a problem with other audio containers (ones which use a different tag/comment system).
Your major problem is that ID3 tags are a fixed specification. The best you can get is to write inside the UserDefinedText tag. Let's try this using ffmpeg, let's use the anthem of Brazil which I find quite amusing (and it is copyright free) as an example:
$ wget -O brazil.mp3 http://www.noiseaddicts.com/samples_1w72b820/4170.mp3
$ exiftool -s brazil.mp3
...
Emphasis : None
ID3Size : 4224
Title : 2rack28
Artist :
Album :
Year :
Comment :
Genre : Other
Duration : 0:01:10 (approx)
OK, we already have some tags in there. ffmpeg time:
$ ffmpeg -i brazil.mp3 -c:a copy -metadata Artist=Someone -metadata MyOwnTag=123 brazil-tags.mp3
$ exiftool -s brazil-tags.mp3
ExifToolVersion : 10.20
...
Emphasis : None
ID3Size : 235
Title : 2rack28
Artist : Someone
UserDefinedText : (MyOwnTag) 123
EncoderSettings : Lavf57.41.100
Album :
Year :
Comment :
Genre : Other
Duration : 0:01:11 (approx)
To make a comparison against a more flexible format (you should actually use some encoder parameters to get decent audio, but we are not interested in audio):
$ ffmpeg -i brazil.mp3 brazil.ogg
$ exiftool -s brazil.ogg
...
Vendor : Lavf57.41.100
Encoder : Lavc57.48.101 libvorbis
Title : 2rack28
Duration : 0:00:56 (approx)
And now tagging with ffmpeg:
$ ffmpeg -i brazil.ogg -c:a copy -metadata MyOwnTag=123 -metadata MyExtraThing=Yay brazil-tags.ogg
$ exiftool -s brazil-tags.ogg
...
Vendor : Lavf57.41.100
Encoder : Lavc57.48.101 libvorbis
Title : 2rack28
Myowntag : 123
Myextrathing : Yay
Duration : 0:00:56 (approx)
And we have the tags. This is because Vorbis Comments are allowed to be anything, contrary to ID3Tags which have only a number of allowed values (tag names).
You do not need ffmpeg to use Vorbis Comments. vorbiscomment is much simpler to use, for example:
$ vorbiscomment -a -t EvenMoreStuff=Stuff brazil-tags.ogg
$ exiftool -s brazil-tags.ogg
...
Vendor : Lavf57.41.100
Encoder : Lavc57.48.101 libvorbis
Title : 2rack28
Myowntag : 123
Myextrathing : Yay
Evenmorestuff : Stuff
Duration : 0:00:56 (approx)
Extra note: FLAC uses vorbis comments as well.
References:
ID3v2 spec: List of possible ID3v2 tags
| Add a new custom metadata tag |
1,404,054,539,000 |
I have had this problem for a very long time, had several discussions with friends, and tried searching relating info online. All efforts were in vain so I decide to give a shot here.
I have lots of files that I would like to annotate. Not necessarily are they pictures or documents, but also audio/video files. Now, I understand that there are ways to annotate a PDF, and there are ways to add metadata to PDF/mp3/mp4.., but those methods are not enough for me.
More specifically, when it comes to PDF files, usually I would like to take some notes in my favorite format. The current best way I can think of is to create another file with the same name and put them in the same directory (or tar them together), e.g. Learn-How-to-Learn.pdf Learn-How-to-Learn.pdf.note.md. However, I found this method cumbersome, for instance it is hard to always link them together and with their names synced.
When it comes to mp3/mp4 files, I also want to link them to other files that contain my notes. For example,
00:45:37,I would like to listen to this part again,20190610T19:03:56
01:03:55,Donald Knuth made a good point on blah blah,20190610T20:00:03
These examples go on and on.. I feel that this is very useful, and there must be some clever solutions out there. But to my surprise, I haven't found any! Please let me know if I should be clearer.. sincerely I would like to have a beautiful solution. Thank you in advance!
|
Based on your question, it would be something like https://github.com/ljmdullaart/a-notate. Yes, it was written (by me) after you asked this question, and it is inspired by it.
| Annotating any files |
1,404,054,539,000 |
We need to back up the a/c/mtime stamps of files and directories, and have the ability to restore them if necessary.
An example command to back up could be:
$ timestamp backup --all-stamps --incl-dir-stamps --recursive out.stampbak file.bin /dir/with/files/
And to restore could be:
$ timestamp restore --all-stamps out.stampbak
Does such a command exist to achieve the results I'm looking for? Nanosecond accuracy would be preferred, or anything else more granular, although I believe most filesystems stop at nanoseconds. POSIX compliance is also important as we need timestamps on both BSD and GNU/Linux systems to be backed/restored.
Edit: To clarify a few things, the filesystems I listed are, for the most part, holding on to files for now until I can ultimately move them to a master ZFS server. This is what made us consider the possibility of preserving time stamps before migrating data to ZFS. Just in case stamps were to change during the transfer process (we use several methods like HTTP, FTP, SMB), we want to ensure they can be restored to what they should be. Even post transfer, it would be beneficial to retain the timestamps as a small backup, so regardless of the fact the files will end up on ZFS, the question is still about backing/restoring timestamps only. Permissions and group information can be omitted from the backup.
The operating systems used for backing up timestamps include macOS Mojave 10.14.6, Ubuntu Server 18.04.5 LTS, Debian 10.2/7.8, FreeBSD 12.2 and CentOS 8. Restoring will mainly be done on the FreeBSD (ZFS server host), I don't think it will be necessary anywhere else. If it helps to mention, I have a few drives with files on NTFS partitions. Is it necessary to run a small Windows instance to back these stamps up properly?
Furthermore I was under the impression most filesystems used atime, ctime and mtime, and no others (I assume this ultimately differs between filesystems). My goal is to save and restore all of them, if possible.
I mentioned POSIX compliance in that, the tool can be used on either platform--I apologize if my description earlier was incorrect or inadequate. I also thought ctime = creation/birth time, I now see it's change time.
The timestamps of most importance to be retained are creation/birth time and modification time. Access time would be nice as well, change time is of least importance but would still help to have. Again, I apologize for the error, I'm still learning.
Edit 2: From this article I learned this:
there’s no file creation timestamp kept in most filesystems – meaning you can’t run a command like “show me all files created on certain date”. This said, it’s usually possible to deduce the same from ctime and mtime (if they match – this probably means that’s when the file was created).
When a new file or directory is created, usually all three times – atime, ctime and mtime – are configured to capture the current time.
Since the destination filesystem will be ZFS where timestamps of files/directories will be restored, it is probably best to at least save atime, ctime, mtime in case the filesystem the timestamps were saved from does not store a birthtime.
|
If you have GNU find and Perl available, you could rig something up with them. The -printf action to find can print the timestamps, and Perl has the utime function to modify them. Or two, the builtin one, and one in the Time::HiRes module, the latter supports subsecond precision.
Some files for testing:
$ find . -printf '%A@ %T@ %p\n'
1631866504.4180281370 1631866763.5380257720 .
1631866763.5380257720 1631866768.2101700800 ./dir
1631866768.2101700800 1631866768.2101700800 ./dir/bar.txt
1631866760.2619245850 1631866760.2619245850 ./foo.txt
Save the timestamps to a file:
$ find . -printf '%A@ %T@ %p\0' > ../times
Let's trash the timestamps:
$ touch -d 1999-01-01 **/*
$ find . -printf '%A@ %T@ %p\n'
1631866771.6022748540 1631866763.5380257720 .
915141600.0000000000 915141600.0000000000 ./dir
915141600.0000000000 915141600.0000000000 ./dir/bar.txt
915141600.0000000000 915141600.0000000000 ./foo.txt
and reload them from the file:
$ perl -MTime::HiRes=utime -0 -ne 'chomp; my ($atime, $mtime, $file) = split(/ /, $_, 3); utime $atime, $mtime, $file;' < ../times
$ find . -printf '%A@ %T@ %p\n'
1631866771.6022748950 1631866763.5380258560 .
1631866771.6022748950 1631866768.2101700310 ./dir
1631866768.2101700310 1631866768.2101700310 ./dir/bar.txt
1631866760.2619245050 1631866760.2619245050 ./foo.txt
The file simply contains NUL-terminated fields with the atime, mtime and the filename, and the Perl snippet calls utime for each set of values.
With NUL as terminator, it should be able to deal with arbitrary filenames, but note that I used \n instead of \0 for the printouts here, which won't work with that Perl snippet, since it expects NUL, and not newline.
Also, I completely ignored ctime, since as far as I know, it can't be set with utime()/utimensat() anyway. Same for any birth timestamps.
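If Perl isn't available, a rougher mtime-only sketch is possible with GNU find and GNU touch alone (this is my own assumption-laden variant: the `/tmp/mtimes` path is arbitrary, it only saves mtime, it requires bash for `read -d ''`, and it relies on GNU touch accepting `@epoch` timestamps with fractional seconds):

```shell
# save modification times (GNU find), NUL-terminated so odd filenames survive
find . -type f -printf '%T@ %p\0' > /tmp/mtimes

# ... transfer the files / trash the timestamps ...

# restore them (GNU touch understands @epoch, fractions included)
while IFS=' ' read -r -d '' mtime file; do
    touch -d "@$mtime" "$file"
done < /tmp/mtimes
```

As with the Perl version, atime could be saved and restored the same way by adding `%A@` to the printf format and a second field to the read.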
| Backup and restore timestamps of files/directories? |
1,404,054,539,000 |
I have a recovered dir with mp3 and flac files. The names were lost. So all I got is a mess of around 30,000 files with names like f30818304.flac
I played some and see that the tags in the files are intact.
But the thing is, most of it I probably don't need. I just want to see if there's any rarities in those files. So what I need is a way to massively write the tags to a file. Like "Artist - Song", one per line would be enough.
Anyone know of a way to do this ? command line preferably.
|
With eyeD3 (source):
sudo apt install eyed3
find /usr/audio/incoming -name '*.mp3' -exec eyeD3 -t 'New Title' '{}' \; -exec mv '{}' /usr/audio/complete \;
-t STRING, --title=STRING Set the track title.
Or for one folder just
eyeD3 --rename '$artist - $album - $track:num - $title' *.mp3
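Since eyeD3 only covers the MP3s, the "Artist - Song, one per line" dump the question asks for can also be sketched with ffprobe (from the ffmpeg package), which reads both MP3 and FLAC tags; the tags.txt output name is just an assumption:

```shell
# dump "Artist - Title" for every audio file into tags.txt
for f in *.flac *.mp3; do
    [ -e "$f" ] || continue
    artist=$(ffprobe -v quiet -show_entries format_tags=artist -of default=nw=1:nk=1 "$f")
    title=$(ffprobe -v quiet -show_entries format_tags=title -of default=nw=1:nk=1 "$f")
    printf '%s - %s\n' "$artist" "$title"
done > tags.txt
```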
| How do I get song names and artists from mp3 files in a dir? |
1,404,054,539,000 |
I have a dual booted system, and its been years since I have used the linux side. I am running Fedora15 and am trying to upgrade. I have tried
yum install preupgrade
and I get the error message
Error: Cannot retrieve repository metadata (repomd.xml) for repository: fedora. Please verify its path and try again
I have tried
yum clean all
rm -f /var/lib/rpm/__db*
rpm --rebuilddb
yum update yum
yum update
yum erase apf
sudo sed -i 's/https/http/g' /etc/yum.repos.d/epel.repo
and I am still at a loss. I get a similar error messages.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: updates-testing-debuginfo. Please verify its path and try again
I am hoping to upgrade my system, or to at least update the packages. Does any one have any other suggestions? Let me know if you need additional information.
|
Fedora 15 reached EOL on 2012-06-26 (four years before this answer), so you are trying to update an unsupported system. There are no updates nor security fixes. I'm afraid you are not able to upgrade your system to the newest stable Fedora (24) easily, and the best solution is to back everything up, then download and install a stable system from scratch.
If you are just trying to update your packages to the latest versions available for Fedora 15, then you can edit the files under /etc/yum.repos.d/ and use the archived repository from https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/15/Fedora/. You have to edit baseurl and set the correct URL based on that link:
# /etc/yum.repos.d/fedora.repo
[fedora]
(...)
baseurl=https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/$releasever/Everything/$basearch/os/
(...)
# /etc/yum.repos.d/fedora-updates.repo
[updates]
(...)
baseurl=https://archives.fedoraproject.org/pub/archive/fedora/linux/updates/$releasever/$basearch/
(...)
But, don't do it. Just reinstall.
| Fedora 15 updates |
1,404,054,539,000 |
A lot of players can get CDDB info about Audio CDs, but I have never seen that in Clementine.
Given the prestige and other qualities of this player, it's hard to believe that it lacks this feature.
Is it?
|
Based on perusal of the issues on Clementine's GitHub page, it looks like Clementine should have CDDB support, per Issue#1239 (which is marked as a duplicate of Issue#314).
The closing comments of Issue #314 indicate that Clementine uses the libtunepimp library for talking to the MusicBrainz service for this tagging capability. Unfortunately, per the MusicBrainz page, libtunepimp uses their deprecated web service (see http://wiki.musicbrainz.org/Web_Service for more details). Which means that Clementine may not be getting all of the latest-and-greatest from MusicBrainz.
Now, Issue #314 is more about get automatic song tagging support in general into Clementine, not necessarily CDDB specifically. And indeed, among the open Clementine issues, there are a couple which suggest that CDDB support is lacking/not working as expected (e.g. Issue#3067, Issue#4120).
So I think that the answer to your question of whether Clementine lacks CDDB support is most likely: "Yes, its CDDB support is lacking enough to be considered missing". Sadly.
| Can Clementine music player fetch CDDB/FreeDB data? |
1,404,054,539,000 |
I want to search for PNG's in a (sub-)folder structure with the meta tag software set to the value GNOME::ThumbnailFactory and delete them with a single bash command.
Here's the story behind it; you can skip that if you want:
I scrapped my Ubuntu ext filesystem by formatting the drive, and then decided to recover my files with PhotoRec. My problem now is that I have all my files wildly distributed in some sub-folders, and, you guessed it, the hidden GNOME thumbnail folder is also evenly distributed in them, and it is way larger than the original files because it also indexed my external hard drive that I sometimes had mounted. I found out that all of these thumbnails had the PNG Software tag set to the value GNOME::ThumbnailFactory by looking at some of them with ExifToolGUI on Windows, but I can't figure out how to find and delete them with a Linux command-line tool, and I'm not very proficient with grep, to be honest.
|
You can do this using ImageMagick. Once ImageMagick is installed, run identify -verbose image.png and pick out what you want from the output using grep. Note that the filename must be passed to the inline script as a positional argument ($1), not as an unset variable:
find / -name "*.png" -exec sh -c '
    if identify -verbose "$1" | grep -q your_pattern_here
    then
        echo "$1" # or do something else here, e.g. rm -- "$1"
    fi
' sh {} \;
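Since exiftool was already part of the workflow, an alternative sketch is its -if condition, which avoids spawning identify once per file (the recursive walk over . and the exact tag comparison are assumptions here; check the printed list before turning it into rm):

```shell
# list PNGs whose Software tag is exactly GNOME::ThumbnailFactory
exiftool -r -ext png -if '$Software eq "GNOME::ThumbnailFactory"' \
         -p '$Directory/$FileName' .
# once the list looks right, pipe it into a delete loop, e.g.:
# ... | while IFS= read -r f; do rm -- "$f"; done
```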
| Searching Files according to PNG Meta-Tags |
1,404,054,539,000 |
I have been using unix systems the majority of my life. I often find myself teaching others about them. I get a lot of questions like "what is the /etc folder for?" from students, and sometimes I have the same questions myself. I know that all of the information is available with a simple google search, but I was wondering if there are any tools or solutions that are able to add descriptions to folders (and/or files) that could easily be viewed from the command line? This could be basically an option to ls or a program that does something similar.
I would like there to be something like this:
$ ls-alt --show-descriptions /
...
/etc – Configuration Files
/opt – Optional Software
/bin - Binaries
/sbin – System Binaries
/tmp – Temporary Files
...
Could even take this a step further and have a verbose descriptions option:
$ ls-alt --show-descriptions-verbose /
...
/etc – The /etc directory contains the core configuration files of the system, used primarily by the administrator and services, such as the password file and networking files.
/opt – Traditionally, the /opt directory is used for installing/storing the files of third-party applications that are not available from the distribution’s repository.
/bin - The ‘/bin’ directory contains the executable files of many basic shell commands like ls, cp, cd etc. Mostly the programs here are in binary format and accessible to all the users of the Linux system.
/sbin – This is similar to the /bin directory. The only difference is that it contains the binaries that can only be run by root or a sudo user. You can think of the ‘s’ in ‘sbin’ as super or sudo.
/tmp – This directory holds temporary files. Many applications use this directory to store temporary files. /tmp directories are deleted when your system restarts.
...
I know that there is no default way to do this with ls, and to add such a feature would probably require a lot of re-writing of kernel code to account for the additional data being stored, so I'm not asking how to do this natively necessarily (unless there is an easy way I am overlooking). I am more asking if there is a tool that already exists for educational purposes that enables this sort of functionality? I guess it would take output from ls and then do a lookup to match directory names to descriptions it has already saved somewhere, but I digress.
|
tree --info will do what you want.
You can create .info text files that contain your remarks about certain files and folders, or about groups of files and folders (using wildcards).
tree --info will then show them in the directory listing.
Multi-line comments are possible.
There is also a global info file in /usr/share/finfo/global_info that contains explanations for the Linux file system (see here). This file also shows you how the .info file syntax looks.
The homepage of the software is https://fossies.org/linux/misc/tree-2.1.1.tgz/.
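A minimal sketch of the .info syntax, assuming tree >= 2.0: one pattern per line, with its comment on the following tab-indented line(s) (the demo directory names are made up):

```shell
# build a tiny directory with an .info file annotating its entries
mkdir -p demo/etc demo/opt
printf 'etc\n\tConfiguration files\nopt\n\tOptional software\n' > demo/.info
tree --info demo
```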
| Is there a way to add a "description" field / meta-data that could viewed in ls output (or an alternative to ls)? |
1,404,054,539,000 |
Today I noticed that tripwire thinks that some Apache configuration files changed yesterday. I know I did not make any changes to those files.
Looking at the info, it shows that only the Inode number changed:
Property: Expected Observed
------------- ----------- -----------
Object Type Regular File Regular File
Device Number 2305 2305
* Inode Number 5770048 5771399
Mode -rw-r--r-- -rw-r--r--
Num Links 1 1
UID root (0) root (0)
GID root (0) root (0)
Size 1055 1055
Modify Time Mon 09 Oct 2017 04:54:54 PM PDT
Mon 09 Oct 2017 04:54:54 PM PDT
Blocks 8 8
CRC32 BSW2x+ BSW2x+
MD5 CqXESieHTV/33Ye6iuaHjk CqXESieHTV/33Ye6iuaHjk
How could the Inode of a file change and nothing else?
|
One way:
cp -p file file.new && mv file.new file
For example:
$ ls -li file
12289 -rw-r--r-- 1 jeff jeff 0 Jun 13 14:24 file
$ cp -p file file.new && mv file.new file
$ ls -li file
12292 -rw-r--r-- 1 jeff jeff 0 Jun 13 14:24 file
Another possibility would be that the file was restored from a backup system (and that backup system restored timestamps).
Another activity that would update the inode number and not touch the contents would be a sed -i command that made no changes, since sed -i uses a temporary file for the results, which is then renamed over the original at the end.
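The sed -i case is easy to demonstrate: even a substitution that changes nothing rewrites the file through a temp file plus rename, so the inode number moves while the contents stay identical (note that, unlike the tripwire report above, sed -i does update mtime):

```shell
echo hello > demo.conf
ls -i demo.conf                      # note the inode number
sed -i 's/hello/hello/' demo.conf    # textual no-op
ls -i demo.conf                      # different inode, same contents
```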
| Why does a file's Inode number change and nothing else? |
1,404,054,539,000 |
Does anyone know when Unix started supporting birth/creation timestamps for files and directories? If possible, also when the first GUI file manager displayed them by default for users.
For comparison with Windows, Unix like and Linux:
I know from practical experience that since Windows XP (year 2001) in Windows File Manager (GUI) displays it for directories and files.
System 0.97 (Macintosh System Software) (year 1984) in Finder 1.0 for files. For directories I don't know anymore.
In iOS 11 (year 2017), the Files app was introduced and shows birth/creation time for directories and files to users by default.
Some Linux distributions for example.
KDE (since year 2019) in Dolphin.
Linux Mint (since year 2018) in Nemo.
These operating systems still do not, as of today:
Android 11 (year 2020)
many popular Linux distributions for the end users e.g Fedora 33 (year 2020), Ubuntu 20.10 (year 2020).
|
Full support for birth timestamps has three components:
the file system must be able to store them;
the operating system must provide access to them;
end-user software must display them.
In the Unix world, it seems that at least three POSIX-style file systems support birth timestamps:
UFS2, the default in FreeBSD since 2003;
Veritas File System, aka VxFS and JFS on HP-UX, used in HP-UX since at least 1996 (but I’m not sure whether it supported birth timestamps back then);
ZFS, available on Solaris since 2006.
(Non-POSIX-style file systems with support for birth timestamps include FAT and ISO-9660; while Unix has supported these for a long time, I’m ignoring them here since they wouldn’t have influence core APIs much.)
As far as I can tell, neither HP-UX nor Solaris provide a stat-style system call providing access to birth timestamps. FreeBSD provides st_birthtime in struct stat since FreeBSD 5.1; its stat(1) implementation can show this since 5.1 too.
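On Linux, GNU coreutils (8.31 and later, via statx(2)) exposes the birth time as %w in stat, printing - when the filesystem or kernel can't report it; FreeBSD's stat uses a different flag, shown as a comment below:

```shell
touch newfile
stat -c '%w' newfile       # GNU stat: birth time, or "-" if unavailable
# FreeBSD equivalent: stat -f '%SB' newfile
```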
| Since when do Unix systems support birth/creation time (btime/crtime) for files and directories? |