I tried this command to compress all the css files in all subdirectories.
find . -iname "*.css*" -exec gzip -c '{}' > '{}'.gz \;
But it only creates a {}.gz file. I ended up using this:
find . -iname "*.css" -exec sh -c "gzip -c '{}' > '{}'.gz" \;
which works well.
The question is why the first one didn't work and second one did?
Note: I could simply use gzip's -k switch to keep the source files, but the gzip shipped with CentOS 7 does not support it.
|
It's all about when, and by which shell, the command gets interpreted. In the first command, your interactive shell interprets the >, creating a single file literally named {}.gz before find even starts; every gzip -c invocation then writes into that one file. In the second, the redirection is interpreted by the inner sh after find has substituted the {}, so each file gets its own .gz, as expected.
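A minimal way to see this, using a slight variation of the second command (the scratch paths here are made up for the demo): passing {} to the inner shell as a positional parameter, instead of splicing it into the command string, also sidesteps quoting problems with unusual file names.

```shell
# Scratch demo: one .gz per .css appears only because the inner sh
# performs the redirection separately for each file found.
mkdir -p /tmp/gzdemo/sub && cd /tmp/gzdemo
echo 'body{}' > a.css
echo 'p{}'    > sub/b.css
# {} is passed as $1 to sh, so the redirection target is computed per file:
find . -iname '*.css' -exec sh -c 'gzip -c "$1" > "$1.gz"' sh {} \;
ls a.css.gz sub/b.css.gz
```

After the run, every .css file has a sibling .css.gz next to it.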
| Find exec - Why {} can't be used as the output file name? [duplicate] |
I want to create a GZIP compressed TAR archive containing the contents of /var/log, but excluding /var/log/messages file.
I tried tar -cvf var/log.tar var/log/ -x *.messages*, but got the error message:
tar: You may not specify more than one `-Acdtrux' or `--test-label' option
Any ideas on how to go about this? I'm pretty confident I can create the archive, just not sure how to exclude the /var/log/messages file.
|
The -x flag (short for --extract) tells tar to unpack an archive. The -c flag (short for --create) tells tar to create an archive. You're passing both flags, which is a contradiction. It looks like you probably want to use the --exclude flag instead of the -x flag, e.g.:
tar -cvf var/log.tar var/log --exclude='*.messages*'
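A quick sanity check on a throwaway tree (a /tmp path standing in for the real /var):

```shell
# Build a fake log tree, archive it while excluding the messages file,
# then list the archive to confirm the exclusion worked.
mkdir -p /tmp/tardemo/var/log && cd /tmp/tardemo
echo m > var/log/messages
echo s > var/log/syslog
tar -cf var/log.tar --exclude='var/log/messages' var/log
tar -tf var/log.tar
```

The listing should contain var/log/syslog but no messages entry.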
| Trying to Create a TAR Archive, But Exclude .messages Files |
I'm trying to figure out the number of lines that contain the strings "event" and "type". The files I want to search through are in multiple folders and are zipped. I'm able to get an aggregated count of what I'm looking for, but my goal is to have the count displayed for each file. This is what I'm currently using:
zcat
/folder1/{folderA,folderB,folderC}/folder2/folder3/result-2018-05-1* |
grep 'event' | grep 'type' | wc -l
And my output is:
86446
But I want my output to look like:
result-2018-05-10.log.gz: 1000
result-2018-05-11.log.gz: 3000
result-2018-05-12.log.gz: 20000
result-2018-05-13.log.gz: 4446
result-2018-05-14.log.gz: 12000
result-2018-05-15.log.gz: 10000
result-2018-05-16.log.gz: 15000
result-2018-05-17.log.gz: 5000
result-2018-05-18.log.gz: 6000
result-2018-05-19.log.gz: 10000
Any suggestions?
|
For only two tests, this should be enough:
zgrep -E -c 'event.*type|type.*event' /folder1/{folderA,folderB,folderC}/folder2/folder3/result-2018-05-1*
Testing whether a line contains both type and event is the same as testing whether it contains type followed later by event, or event followed later by type. This wouldn't scale well if a third test were needed.
Then adding something like | sed 's#^.*/##' should give the exact result in the question.
UPDATE:
For something more generic, a loop seems better, so:
for i in /folder1/{folderA,folderB,folderC}/folder2/folder3/result-2018-05-1*; do
    printf '%s: ' "$(basename "$i")"
    zcat < "$i" | grep 'filter1' | grep 'filter2' | grep 'filter3' | wc -l
done
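A small reproducible check of the per-file counting (file names and contents here are made up):

```shell
# Two tiny gzipped logs; count lines containing both "event" and "type".
mkdir -p /tmp/zcdemo && cd /tmp/zcdemo
printf 'event type a\ntype only\ntype then event\n' | gzip > one.log.gz
printf 'event type b\n' | gzip > two.log.gz
# With several files, -c reports one labelled count per file:
zgrep -Ec 'event.*type|type.*event' one.log.gz two.log.gz
```

The counts here should be 2 for one.log.gz and 1 for two.log.gz, matching what the grep-pipeline method gives per file.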
| Finding the Count for a String on Multiple Zipped Files in Multiple Directories (non aggregated) |
I'm trying to optimize an Nginx server. I've enabled the gzip_static module to serve pre-compressed static files such as CSS, JS, and HTML.
The Nginx wiki says that the timestamps of the compressed and uncompressed files should match, so I'm trying to touch the files before gzipping.
I'm trying to touch and generate *.gz for these static files using the commands below:
$ su -c "find . -type f -name "*.css" -size +1024b -exec sh -c "touch {} \
&& gzip -9v < {} > {}.gz" \;"
$ su -c "find . -type f -name "*.js" -size +1024b -exec sh -c "touch {} \
&& gzip -9v < {} > {}.gz" \;"
$ su -c "find . -type f -name "*.html" -size +1024b -exec sh -c "touch {} \
&& gzip -9v < {} > {}.gz" \;"
but I receive "User {} not found" error.
Can anyone tell me what's wrong with the above commands? And is there a better approach for generating *.gz files automatically whenever the timestamp of an uncompressed file changes (a cron job, maybe)?
|
I think you're running into a quoting issue because you've got double quotes contained inside the command you're trying to get su -c ... to execute. You might want to try wrapping the whole thing inside single quotes instead.
$ su -c 'find . -type f -name "*.css" -size +1024b -exec sh -c "touch {} && \
gzip -9v < {} > {}.gz" \;'
You could also escape the inner double quotes instead, but that quickly becomes messy to debug, so I'd avoid it; depending on your style and taste, though, it is another option.
| touch & gzip all HTML, CSS, JS files in a directory recursively |
This question discussed how to remove parentheses from simple uncompressed text files.
The accepted answer suggested the following:
cat in_file | tr -d '()' > out_file
To my observation, however, this answer is not able to produce the desired effect on compressed text files using gzip.
Is there any way to remove parentheses from text files compressed with gzip without uncompressing them?
|
No, at best you can do it without writing the decompressed file to disk, but you do need to decompress it in order to edit it.
zcat in_file.gz | tr -d '()' | gzip -c >out_file.gz
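For example (scratch file names assumed):

```shell
# Round-trip: decompress to a pipe, strip parentheses, recompress.
mkdir -p /tmp/trdemo && cd /tmp/trdemo
printf 'f(x) = (a+b)\n' | gzip > in_file.gz
zcat in_file.gz | tr -d '()' | gzip > out_file.gz
zcat out_file.gz    # fx = a+b
```

The decompressed bytes only ever exist in the pipe buffers, never as a file on disk.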
| How to remove parentheses from a compressed (preferably gzipped) text file |
I have a file called file1.atz. I know that it is gzipped. The uncompressed file extension should be .ats. I have read the man pages and I just want to verify I'm uncompressing this correctly, because the program that's supposed to read this file isn't able to (which could be for other reasons, I'm just trying to isolate the problem). So if I do this:
gzip -d -S atz file1.atz
I'm then left with a file called file1. (note the trailing dot). Did I unzip that correctly? Do I then just manually rename it with mv file1. file1.ats to get what I want?
|
Yes, but easier would be to decompress it to stdout and then redirect it.
gunzip -c -S atz file1.atz > file1.ats
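A scratch reproduction (the payload and paths are invented for the demo):

```shell
# Make a gzip file with the custom .atz suffix, then decompress it
# to stdout and redirect straight to the desired .ats name.
mkdir -p /tmp/suffdemo && cd /tmp/suffdemo
echo 'payload' > orig && gzip -c orig > file1.atz && rm orig
gunzip -c -S atz file1.atz > file1.ats
cat file1.ats    # payload
```

With -c no output file name is computed at all, so no mv is needed afterwards.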
| How to uncompress a gzip file with a custom extension? |
I have multiple .gz files in a folder and I want to uncompress them into another folder with gunzip, without deleting the original files.
|
Do you mean something like gunzip -c folder1/myfile.gz > folder2/myfile?
With the -c option, gunzip keeps the original files unchanged.
If you want to do it for all .gz files in folder1, you could use
cd folder1; for f in *.gz ; do gunzip -c "$f" > ../folder2/"${f%.*}" ; done
| gunzip multiple compressed files to another directory without deleting the .gz files |
I was wondering what the difference is when decompressing using gzip -d and zcat.
Sometimes when I try gzip -d it says unknown suffix -- ignored. However, zcat works perfectly.
|
The zcat equivalent using gzip is gzip -dc, and when used that way, it doesn’t care about the file extension. Both variants decompress their input and output the result to their standard output.
gzip -d on the other hand is intended to decompress a file, storing the uncompressed contents in another file. The output file’s name is calculated from the input’s, by removing its extension; files whose extension doesn’t match one of those handled by gzip are ignored. The documentation says that
gunzip takes a list of files on its command line and replaces each file whose name ends with .gz, -gz, .z, -z, or _z (ignoring case) and which begins with the correct magic number with an uncompressed file without the original extension.
gunzip also recognizes the special extensions .tgz and .taz as shorthands for .tar.gz and .tar.Z respectively.
Files with no extension, or any other extension, are ignored, producing the message you see:
unknown suffix — ignored
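The difference is easy to reproduce (scratch names assumed; this is gzip's zcat as found on Linux — the historic compress zcat only handled .Z files):

```shell
# A valid gzip stream under an unusual name:
mkdir -p /tmp/sufdemo && cd /tmp/sufdemo
echo 'hello' | gzip > data.bin
gzip -d data.bin || true    # gzip: data.bin: unknown suffix -- ignored
zcat data.bin               # hello
```

gzip -d leaves the file untouched because it cannot work out an output name, while zcat happily checks the magic number and decompresses to stdout.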
| Difference between gzip -d and zcat |
I'm using a production server for loading large data sets into Hadoop, to be accessed from a Hive table.
We are loading subscribers' web-browsing data from the telecom sector. We have a large number of .csv.gz files (each around 300-500 MB) compressed with gzip.
Suppose a file is as below:
Filename: dna_file_01_21090702.csv.gz
Contents:
A,B,C,2
D,E,F,3
We unzip 50 or so files and concatenate them into one file. For troubleshooting purposes, we prepend the file name as the first column of every row.
So the concatenated data file would be:
dna_file_01_21090702.csv.gz,A,B,C,2
dna_file_01_21090702.csv.gz,D,E,F,3
For that purpose I have written the bash script below:
#!/bin/bash
func_gen_new_file_list()
{
    > ${raw_file_list}
    ls -U raw_directory | head -50 >> ${raw_file_list}
}

func_create_dat_file()
{
    cd raw_directory
    gzip -d `cat ${raw_file_list}`
    awk '{printf "%s|%s\n",FILENAME,$0}' `cat ${raw_file_list} | awk -F".gz" '{print $1}'` >> ${data_file}
}

func_check_dir_n_load_data()
{
    ## Code to copy data file to HDFS file system
}
##___________________________ Main Function _____________________________
##__Variable
data_load_log_dir=directory
raw_file_list=${data_load_log_dir}/raw_file_list_name
data_file_name=dna_data_file_`date "+%Y%m%d%H%M%S"`.dat
data_file=${data_load_log_dir}/${data_file_name}
##__Function Calls
func_gen_new_file_list
func_create_dat_file
func_check_dir_n_load_data
Now the problem is that the gzip -d command performs extremely slowly. I mean really, really slow. After unzipping 50 files and building the concatenated data file, its size is around 20-25 GB.
Unzipping 50 files and concatenating them into one takes almost an hour, which is huge. At this rate it's impossible to process all the data generated in a single day.
My production server (VM) is pretty powerful: 44 cores and 256 GB of RAM.
The hard disk is also very good and high-performing; iowait is around 0-5.
How can I speed this process up? What are the alternatives to gzip -d? Is there any other way to build the concatenated data file more efficiently? Please note that we need to keep the file name in the records for troubleshooting purposes.
Otherwise we could just use zcat and append to a data file without unzipping at all.
|
There is a lot of disk I/O that could be replaced by pipes. The func_create_dat_file takes a list of 50 compressed files, reads each of them and writes the uncompressed data. It then reads each of the 50 uncompressed data files, and writes it out again with the filename prepended. All of this work is done sequentially so can not take any advantage of your multiple cpus.
I suggest you try
func_create_dat_file()
{
    cd raw_directory
    while IFS="" read -r f
    do
        zcat -- "$f" | sed "s/^/${f%.gz}|/"
    done < "${raw_file_list}" >> "${data_file}"
}
Here the compressed data is read once from disk. The uncompressed data is written once to a pipe, read once from the pipe and then written once to the disk. The transformation of the data happens in parallel with the reading and so can use 2 cpus.
[Edit] A comment asked for an explanation of the sed "s/^/${f%.gz}|/" part. This is the code that puts the filename as a new field at the start of each line. $f is the filename, and ${f%.gz} removes .gz from the end of the string. There is nothing special about the | in this context, so ${f%.gz}| is the filename with the trailing .gz removed, followed by a |. In sed, s/old/new/ is the substitute (replace) command, and it takes a regular expression for the old part. ^ as a regular expression matches the start of the line, so putting this together it says: insert the filename (without its trailing .gz) and a | at the beginning of each line. The | was chosen to match the OP's program rather than the OP's description; if this really were a CSV (comma-separated values) file, it should be a comma rather than a vertical bar.
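A tiny end-to-end check of that loop, with made-up file names:

```shell
# One small gzipped "raw" file, a one-line file list, and the zcat|sed loop.
mkdir -p /tmp/dnademo && cd /tmp/dnademo
printf 'A,B,C,2\nD,E,F,3\n' | gzip > dna_file_01.csv.gz
printf '%s\n' dna_file_01.csv.gz > raw_file_list
while IFS="" read -r f
do
    zcat -- "$f" | sed "s/^/${f%.gz}|/"
done < raw_file_list > data_file.dat
cat data_file.dat
# dna_file_01.csv|A,B,C,2
# dna_file_01.csv|D,E,F,3
```

The .gz files are never expanded on disk; only the final data file is written.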
| Alternative of too much slow gzip -d command |
To extract a single .Z file from a given folder I use uncompress file.Z in a terminal and it works flawlessly. If, in the same folder, I want to extract all the .Z files, I use uncompress "*.Z" or uncompress '*.Z' or uncompress \*.Z. But they all give the same error:
gzip: *.Z: No such file or directory
(I have used various forms of quoting there just to show that quoting should not be the problem.)
Same story if I use each file's full extension, i.e. file.fitz.Z. How do I uncompress all the .Z files? What is going wrong?
PS: This has already been posted on SO, but no luck yet (although I can't imagine the question is that hard to answer).
|
It looks like everything you have tried is escaping the special character * causing it to be interpreted literally instead of as a wildcard.
Try using this instead:
uncompress *.Z
"*.Z"
Double quotes will preserve the literal value of the *
Enclosing characters in double quotes (‘"’) preserves the literal value of all characters within the quotes, with the exception of ‘$’, ‘`’, ‘\’, and, when history expansion is enabled, ‘!’.
'*.Z' Single quotes will preserve the literal value of everything
Enclosing characters in single quotes (‘'’) preserves the literal value of each character within the quotes. A single quote may not occur between single quotes, even when preceded by a backslash.
\*.Z An escape (backslash) will also preserve the literal value of *
A non-quoted backslash ‘\’ is the Bash escape character. It preserves the literal value of the next character that follows, with the exception of newline.
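A scratch demo of the unquoted glob. The .Z files here are actually gzip streams given a .Z name purely for illustration (gunzip accepts the .Z suffix, case-insensitively, and detects the real format from the magic number):

```shell
mkdir -p /tmp/globdemo && cd /tmp/globdemo
echo a | gzip > f1.fitz.Z
echo b | gzip > f2.fitz.Z
gunzip *.Z          # unquoted: the shell expands this to the file list
ls f1.fitz f2.fitz
```

With quotes or a backslash, gunzip would instead be handed the literal five-character name *.Z, which is exactly the "No such file or directory" error in the question.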
| Uncompressing *.Z files in a folder returns error |
Background
I was trying to gzip a folder, and all its files, recursively. I tried a command off the top of my head, and it was obviously wrong.
The result is that the folder, and all its subfolders and files, are still in place, but every file is gzipped.
E.g.
#Folder
- filename.php.gz
- file2.txt.gz
#Subfolder
-filename.php.gz
etc.
I am not actually sure which command caused this. I tried a few, and most resulted in an error. But obviously one of them "worked" but didn't do what I wanted.
I suspect this is the command that caused the issue: gzip -r ocloud/ ocloud.zip
Question
What is the command I would use to reverse this? i.e. To leave all folders and files in place, but to unGzip them?
|
You should be able to use the -d option for that:
gzip -r -d ocloud/ ocloud.zip.gz
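Reproducing the accident on a scratch tree and then undoing it (hypothetical file names):

```shell
mkdir -p /tmp/undodemo/ocloud/sub && cd /tmp/undodemo
echo x > ocloud/filename.php
echo y > ocloud/sub/file2.txt
gzip -r ocloud/        # the mistake: every file becomes file.gz in place
gzip -r -d ocloud/     # the fix: recursively decompress them back
ls ocloud/filename.php ocloud/sub/file2.txt
```

-r makes gzip walk the directory tree, and -d reverses the compression file by file, leaving the layout intact.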
| How can I undo an incorrect gzip? |
I have a directory containing multiple folders, each of which contains a .gz file.
How can I unzip all of them at once?
My data looks like this
List of folders
A
B
C
D
In every of them there is file as
A
a.out.gz
B
b.out.gz
C
c.out.gz
D
d.out.gz
|
This uses gunzip to unzip all files that are in a subfolder and end with .out.gz:
gunzip */*.out.gz
This will "loop"* through all folders that have a zipped file in them. Let me add an example:
A
a.out.gz
B
b.out.gz
C
c.out.gz
D
d.out.gz
E
e.out
Using the command above, a.out.gz b.out.gz c.out.gz d.out.gz will all get unzipped, but it won't touch e.out since it isn't zipped.
*this is called globbing or filename expansion. You might like to read some more about it here.
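The example above is easy to replay in a scratch directory:

```shell
# Layout mirroring the question, plus one uncompressed file the glob skips.
mkdir -p /tmp/gmulti/A /tmp/gmulti/B /tmp/gmulti/E && cd /tmp/gmulti
echo 1 | gzip > A/a.out.gz
echo 2 | gzip > B/b.out.gz
echo 3 > E/e.out              # not compressed; */*.out.gz doesn't match it
gunzip */*.out.gz
ls A/a.out B/b.out E/e.out
```

Each a.out.gz becomes a.out in its own folder, in a single command.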
| gunzip multiple files |
Let's say I have these series of commands
mysqldump --opt --databases $dbname1 --host=$dbhost1 --user=$dbuser1 --password=$dbpass1
mysqldump --opt --databases $dbname2 --host=$dbhost1 --user=$dbuser1 --password=$dbpass1
mysqldump --opt --databases $dbname3 --host=$dbhost2 --user=$dbuser2 --password=$dbpass2
mysqldump --opt --databases $dbname4 --host=$dbhost2 --user=$dbuser2 --password=$dbpass2
How do I take all their outputs (assuming each output name is $dbhost.$dbname.sql) and put them inside one file named backupfile.sql.gz, using only one line of code?
Edit: From comments to answers below, @arvinsim actually wants a compressed archive file containing the SQL dumps in separate files, not one compressed SQL file.
|
In your comment to @tink's answer you said you want separate files inside the compressed archive:
mysqldump --opt --databases $dbname1 --host=$dbhost1 --user=$dbuser1 --password=$dbpass1 > "/var/tmp/$dbhost1.$dbname1.sql" ; mysqldump --opt --databases $dbname2 --host=$dbhost1 --user=$dbuser1 --password=$dbpass1 > "/var/tmp/$dbhost1.$dbname2.sql" ; mysqldump --opt --databases $dbname3 --host=$dbhost2 --user=$dbuser2 --password=$dbpass2 > "/var/tmp/$dbhost1.$dbname3.sql" ; mysqldump --opt --databases $dbname4 --host=$dbhost2 --user=$dbuser2 --password=$dbpass2 > "/var/tmp/$dbhost1.$dbname4.sql" ; cd /var/tmp; tar cvzf backupfile.sql.gz "$dbhost1".*.sql
As an alternative output filename I would use backupfile.sql.tgz, so it is clearer to experienced users that this is a compressed tar file.
You can append rm "$dbhost1".*.sql to get rid of the intermediate files.
You could use zip as an alternative to a compressed tar.
I am not sure why you want this as a one-liner. If you just want to issue one command, you should put the lines in a script and execute that.
With the 'normal' tools used for something like this (tar, zip), I am not aware of a way to avoid the intermediate files.
Addendum
If you really do not want intermediate files (and assuming the output fits in memory) you could try something like the following Python program. You could write this as a one-liner (python -c "from subprocess import check_output; from cStr...."), but I really do not recommend that.
from subprocess import check_output
from cStringIO import StringIO
import tarfile

outputdata = [
    ('$dbhost1.$dbname1.sql', '$dbname1'),
    ('$dbhost1.$dbname2.sql', '$dbname2'),
    ('$dbhost1.$dbname3.sql', '$dbname3'),
    ('$dbhost1.$dbname4.sql', '$dbname4'),
]

with tarfile.open('/var/tmp/backupfile.sql.tgz', 'w:gz') as tgz:
    for outname, db in outputdata:
        cmd = ['mysqldump', '--opt', '--databases']
        cmd.append(db)
        cmd.extend(['--host=$dbhost1', '--user=$dbuser1', '--password=$dbpass1'])
        out = check_output(cmd)
        buf = StringIO(out)
        buf.seek(0)
        tarinfo = tarfile.TarInfo(name=outname)
        tarinfo.size = len(out)
        tgz.addfile(tarinfo=tarinfo, fileobj=buf)
Depending on how regular your database and 'output' names are you can further improve on that.
| Chaining mysqldumps commands to output a single gzipped file |
I want to take a huge file data.txt and:
Split it by 256m
Add the first 3 lines of data.txt to each split file
gzip it
data.txt looks like:
aaa
bbb
ccc
<data>
<data>
<data>
...
<few million rows>
<data>
The end result would be that each split file will have the same first 3 lines
aaa
bbb
ccc
The most efficient way I know of does it all in memory. If I leave out step 2, it would look like:
split --bytes=256M -d -a 3 --filter='gzip > $FILE.gz' data.txt split/data_
But with step 2, I'm not sure how to do it without splitting the gzip out into a separate command, which would mean writing intermediate files to disk.
|
You can make the filter more complex:
split -C 256M -d -a 3 --filter '(head -n 3 data.txt; cat) | gzip > $FILE.gz' <(tail -n +4 data.txt) data_
This skips the first three lines of data.txt, and then reintroduces them into every split section.
I’m using -C instead of --bytes to ensure that the split files contain full records.
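A scaled-down check of the same idea (tiny -C value so the effect is visible on a few bytes; requires GNU split for --filter). Reading the tail through a pipe, with - for stdin, avoids the bash-only process substitution:

```shell
mkdir -p /tmp/splitdemo && cd /tmp/splitdemo
printf 'aaa\nbbb\nccc\n1\n2\n3\n4\n' > data.txt
# Same pipeline shape as above, with '-' (stdin) in place of <(...):
tail -n +4 data.txt |
  split -C 4 -d -a 3 --filter '{ head -n 3 data.txt; cat; } | gzip > $FILE.gz' - data_
zcat data_000.gz
```

Each data_NNN.gz starts with the three header lines followed by its slice of the data.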
| How to split, add text for each split and gzip file |
I have to gzip multiple files in a directory and rename them. I don't want to zip them into a single zip file. i.e.
gzip:
ABCDEPG01_20171120234905_59977
ABCDEPG02_20171120234905_59978
ABCDEPG03_20171120234905_59979
to:
ABCDEFG_DWH_ABCDEPG01_20171120234905_59977.gz
ABCDEFG_DWH_ABCDEPG02_20171120234905_59978.gz
ABCDEFG_DWH_ABCDEPG03_20171120234905_59979.gz
|
Are you just adding a prefix? Then something like this could do:
prefix=ABCDEFG_DWH_
for f in ABCDEPG*; do
    gzip < "$f" > "$prefix$f.gz" && rm -- "$f"
done
| gzip multiple files and rename them |
I want to compress files matching a particular pattern, individually.
For example, I have directories as below:
/log/log1
-/1/log2/file.1
-/1/log2/file123
-/1/log2/file2.1
/log/log2
-/2/log2/file4.1
-/2/log2/file345
-/2/log2/file3.1
I want to compress all the files having the extension .1 inside /log, recursively.
The result would look like:
/log/log1
-/1/log2/file.gz
-/1/log2/file123
-/1/log2/file2.gz
/log/log2
-/2/log2/file4.gz
-/2/log2/file345
-/2/log2/file3.gz
|
find + bash solution:
find /log -type f -name "*.1" -exec bash -c 'gzip -nc "$1" > "${1:0:-2}.gz"; rm "$1"' _ {} \;
gzip options:
-n - when compressing, do not save the original file name and time stamp by default
-c - write output on standard output; keep original files unchanged
${1:0:-2} - bash substring expansion; the file path with the last 2 characters (the .1) truncated
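A scratch run of the command above (requires bash 4.2 or later for the negative-length substring):

```shell
mkdir -p /tmp/patdemo/log/1 && cd /tmp/patdemo
echo a > log/1/file.1
echo b > log/1/file123
find log -type f -name "*.1" -exec bash -c 'gzip -nc "$1" > "${1:0:-2}.gz"; rm "$1"' _ {} \;
ls log/1
```

Only file.1 is replaced by file.gz; file123 doesn't match *.1 and is left alone.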
| compress file using /bin/gzip with particular patterns |
How to set compression level here?
nice -n 19 tar -czf $out $in
|
nice won't do anything to the level of compression. It will only affect the scheduling of the process by the kernel. All commands below may be prefixed with nice to run them at a different "nice"-level, it will not influence the level of compression, only the time taken to perform the action (if the system is under heavy load).
If you're using GNU tar then the compression program may be set with either the -I or the --use-compress-program options:
$ tar -I "gzip --best" -c -f archive.tar.gz dir
or
$ tar --use-compress-program="gzip --best" -c -f archive.tar.gz dir
Note that the -z option should not be used if you set the compression program explicitly.
The GNU tar manual states that the program used must support an -d option for decompression (if you use these options for decompressing a compressed archive), which gzip does.
With BSD tar, recent versions let you pass options to the compressor via --options, e.g. tar --options gzip:compression-level=9 -czf archive.tar.gz dir; older versions have no similar mechanism.
The other way to achieve this is obviously to create an un-compressed archive and then compress that in the way one wants:
$ tar -c -f archive.tar dir
$ gzip --best archive.tar
or
$ tar -c dir | gzip --best > archive.tar.gz
Yet another way is to set the GZIP environment variable to the flags that you'd like to pass to gzip (note that recent versions of gzip deprecate passing options this way):
$ export GZIP='--best'
$ tar -c -z -f archive.tar.gz dir
Except for the use of -I and --use-compress-program, the last few alternatives work with both GNU and BSD tar.
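Compressible input makes the level difference visible. A quick check with the pipe form in a scratch directory (the `-f -` is explicit stdout, which also works with BSD tar):

```shell
mkdir -p /tmp/lvldemo/dir && cd /tmp/lvldemo
yes 'a fairly repetitive line of text' | head -n 20000 > dir/big.txt
tar -cf - dir | gzip -1 > fast.tar.gz
tar -cf - dir | gzip -9 > best.tar.gz
wc -c fast.tar.gz best.tar.gz   # -9 should not be larger on data like this
```

Exact sizes depend on the gzip build, so only the ordering is worth checking.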
| set compression level with nice and tar |
I have a file that is gzipped twice:
test.gz.gz
How do I grep something from the file above? I don't want to unzip it.
|
You may also
$ zcat test.gz.gz | zgrep "whatever"
zcat and zgrep work like cat and grep, but on compressed data streams.
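A scratch reproduction of the double-compressed case:

```shell
mkdir -p /tmp/ddemo && cd /tmp/ddemo
echo 'needle in a haystack' | gzip | gzip > test.gz.gz
# zcat removes the outer gzip layer; zgrep removes the inner one and greps:
zcat test.gz.gz | zgrep 'needle'    # needle in a haystack
```

Each z-tool strips one layer of compression, so chaining two of them handles the .gz.gz file without ever writing an uncompressed copy to disk.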
| Grep from a doubly-gzipped file |
Why isn't there any difference (output filesize) with these command lines?
Gzip stdin
The output files have the same file size even though the compression levels are different:
tar ... | gzip -c -1 > ...
tar ... | gzip -c -9 > ...
xz stdin
The output files have the same file size even though the compression levels are different:
tar ... | xz -c -1 > ...
tar ... | xz -c -9 > ...
Gzip
The output files have the same file size even though the compression levels are different:
GZ_OPT=-1 tar -zcf ...
GZ_OPT=-9 tar -zcf ...
xz
The output files have the same file size even though the compression levels are different:
XZ_OPT=-1 tar -Jcf ...
XZ_OPT=-9 tar -Jcf ...
|
Higher gzip compression levels are not guaranteed to produce smaller output. In fact, the BUGS section of the gzip man page notes that in some cases the opposite can happen.
For xz (and bzip2) this is even better documented: according to the manual, the numerical level also controls the amount of memory used by the compressor. Using more memory is supposed to give better compression, but again this is not guaranteed.
Especially if your test data is small, identical output sizes are not surprising.
| compression levels gz and xz |
I'm running a long copy of .gz files from one server to another. At the same time, I'd like to unzip the files already copied; for example, all filenames starting with a through c.
How can I do this?
|
The easiest way to do this is to use your shell. Assuming a relatively modern shell (such as bash), you can do
gunzip [a-c]*gz
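A scratch demo of the character-range glob (in the C locale, [a-c] matches exactly a, b, and c):

```shell
mkdir -p /tmp/rangedemo && cd /tmp/rangedemo
for n in alpha bravo charlie delta; do echo "$n" | gzip > "$n.gz"; done
gunzip [a-c]*gz    # only alpha, bravo and charlie are touched
ls
```

delta.gz is left compressed because d falls outside the a-c range.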
| Gunzip in range of files |
I have a problem with this:
for i in ./*.gz
do
    gunzip -c $i | head -8000 | gzip > $(basename $i)
done
I get an "unexpected end of file" error for gzip, and I don't get the 8000 at all, only a few tens of lines. I've checked the integrity of my files and they are fine. Curiously, if I run for individual files:
gunzip -c file1.gz | head -8000 | gzip > file1.gz
I get the expected result. With this script I can obtain the first 8000 lines from a compressed file with 10708712 lines, and compress again. The new file overwrites the original file, but that's ok.
|
You should write the output to a temporary file and then rename it:
for i in ./*.gz
do
    gunzip -c "$i" | head -8000 | gzip > "$i.tmp"
    mv -f "$i.tmp" "$(basename "$i")"
done
Your version only appears to work when gunzip happens to read and buffer enough of the input before the shell's output redirection truncates the very file it is still reading.
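A scratch check of the temp-file approach, with 8000 shrunk to 3 so the result is easy to inspect:

```shell
mkdir -p /tmp/headdemo && cd /tmp/headdemo
seq 1 100 | gzip > nums.gz
for i in ./*.gz
do
    gunzip -c "$i" | head -n 3 | gzip > "$i.tmp"
    mv -f "$i.tmp" "$i"
done
zcat nums.gz    # 1 2 3, one per line
```

Because gunzip reads the original file and gzip writes a different one, there is no truncate-while-reading race; the atomic mv then swaps the result into place.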
| Why I get unexpected end of file in my script when using gzip? |
I have a .gz archive, but for some reason tar says the format is unrecognized, even though I can double-click it in the macOS Finder and extract it normally, and the file command shows the same format as any other tar.gz file.
Why is that, and how can I extract it from the terminal?
$ file archive.gz
archive.gz: gzip compressed data, original size modulo 2^32 4327594
$ tar -xzf archive.gz
tar: Error opening archive: Unrecognized archive format
$ tar --version
bsdtar 3.5.1 - libarchive 3.5.1 zlib/1.2.11 liblzma/5.0.5 bz2lib/1.0.8
|
Your compressed file is probably not a Tar archive.
You can find out the stored original filename with gunzip -lN archive.gz (the -N option makes it print the name saved in the gzip header); that might give you a clue as to the format.
If that doesn't work, then gunzip it, and use file on the uncompressed output.
| Normal gz file not extractable by tar [duplicate] |
I have a simple backup script, which also creates a tar/gzip archive of local data to an external USB device and then copies that archive to a second USB device.
For example:
usb1="/mnt/usbone"
usb2="/mnt/usbtwo"
source="/home/user"
tar cfz ${usb1}/source.tar.gz ${source}
cp -ar ${usb1}/source.tar.gz ${usb2}
This seems like it could be optimized to have tar create copies on both drives, instead of creating one archive, which is copied afterwards. The resulting archive is rather small (<1GB). I am aware that this is not a safe approach for a backup.
Edit: I've quickly tested the solution from Archemar and compared the approaches. For good measure, I also tested the initial approach with rsync. See the results below, measured with /usr/bin/time (not the bash builtin time), along with the scripts I used.
Source is created with dd if=/dev/urandom bs=1M count=1024 of=/tmp/random.blob. Host is a Raspberry Pi 3B running from a microSD card, the mounted targets are USB 2.0 flash drives (${a} and ${b}).
a.sh (tar and cp):
tar cfz ${a}/r1.tar.gz ${s}
cp -ar ${a}/r1.tar.gz ${b}/r1.tar.gz
b.sh (tar and tee):
tar cfz - ${s} | tee ${a}/r2.tar.gz > ${b}/r2.tar.gz
c.sh (tar and rsync):
tar cfz ${a}/r3.tar.gz ${s}
rsync -aW ${a}/r3.tar.gz ${b}/r3.tar.gz
Results:
# /usr/bin/time -v bash a.sh
Command being timed: "bash a.sh"
User time (seconds): 218.71
System time (seconds): 28.33
Percent of CPU this job got: 68%
Elapsed (wall clock) time (h:mm:ss or m:ss): 6:03.13
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2480
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 41
Minor (reclaiming a frame) page faults: 1250
Voluntary context switches: 45519
Involuntary context switches: 25576
Swaps: 0
File system inputs: 3668157
File system outputs: 4197336
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
# /usr/bin/time -v bash b.sh
Command being timed: "bash b.sh"
User time (seconds): 221.64
System time (seconds): 28.62
Percent of CPU this job got: 85%
Elapsed (wall clock) time (h:mm:ss or m:ss): 4:53.98
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2536
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 31
Minor (reclaiming a frame) page faults: 1162
Voluntary context switches: 68310
Involuntary context switches: 35582
Swaps: 0
File system inputs: 2101321
File system outputs: 4197832
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
# /usr/bin/time -v bash c.sh
Command being timed: "bash c.sh"
User time (seconds): 235.24
System time (seconds): 35.01
Percent of CPU this job got: 74%
Elapsed (wall clock) time (h:mm:ss or m:ss): 6:04.03
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2652
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 40
Minor (reclaiming a frame) page faults: 2310
Voluntary context switches: 65402
Involuntary context switches: 45179
Swaps: 0
File system inputs: 4200957
File system outputs: 4197496
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
To my surprise, the results are not that distinct. File system inputs: 2101321 is quite a bit lower with the tar/tee approach though, which I hope is good for the SD card's life.
|
You can use the tee command:
usb1="/mnt/usbone"
usb2="/mnt/usbtwo"
source="/home/user"
tar -cz -f - "${source}" | tee "${usb2}/source.tar.gz" > "${usb1}/source.tar.gz"
where
-f - tells tar to use stdout as backup file.
tee "${usb2}/source.tar.gz" will copy stdin to specified file and stdout.
> "${usb1}/source.tar.gz" will redirect stdout from tee to file.
| Archive directory with tar/gz to multiple locations and omit an additional copy |
I created an image from a 256GB HDD using following command:
dd if=/dev/sda bs=4M | pv -s 256G | gzip > /mnt/mydrive/img.gz
Later I tried to restore the image to another 512 GB HDD on another computer using the following command:
gzip -d /mnt/mydrive/img.gz | pv -s 256G | dd of=/dev/sda bs=4M
The second command shows zero-byte progress for a very long time (just counting seconds, nothing happens), and after a while it fails with an error telling me there is no space left on the device.
The problem is in the gzip command: when I unpack the image file to a raw 256 GB file xxx.img and restore that without using gzip, it works:
dd if=/mnt/mydrive/xxx.img bs=4M | pv -s 256G | dd of=/dev/sda bs=4M
Clearly the problem is in the gzip command (I tried gunzip as well, no luck). As a workaround I can restore images via a huge temporary external drive, which is annoying. The compressed image is about 10% of the size of the raw image. Do you have any idea why gzip is failing?
Side note: the problem is not in pv or dd; the following command fails with the same error message:
gzip -d /mnt/mydrive/img.gz > /dev/sda
|
The following command does not do what you intend:
gzip -d /mnt/mydrive/img.gz > /dev/sda
The command is decompressing the file /mnt/mydrive/img.gz and creating a file called img, which is the ungzipped copy of img.gz. The > /dev/sda does nothing useful, because nothing is sent to /dev/sda via stdout.
This is what you need to do, send the output to stdout (using -c):
gunzip -c /mnt/mydrive/img.gz > /dev/sda
Or
gunzip -c /mnt/mydrive/img.gz | pv -s 256G | dd of=/dev/sda bs=4M
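The effect of -c is easy to verify with a scratch file standing in for /dev/sda:

```shell
# A small random "disk image" round-tripped through gzip and gunzip -c.
mkdir -p /tmp/imgdemo && cd /tmp/imgdemo
head -c 65536 /dev/urandom > disk.img      # pretend this is the source disk
gzip -c disk.img > img.gz
gunzip -c img.gz > restored.img            # -c sends the data to stdout
cmp disk.img restored.img && echo identical
```

Without -c, gunzip would instead create a file next to img.gz and the redirection target would stay empty, which is exactly the failure mode in the question.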
| restoring hdd image using gzip throws error no space left on device |
I'm trying to make a script that tails log files from a remote server into a local directory. tail -F is what I'm using, but after piping it into gzip, nothing happens, although a local copy of the log file is created.
Update:
The script runs, but it never reaches the gzip command, since I have to type Ctrl+C to end the tailing; that ends the script without ever zipping the file.
to_Tomcat(){
# tail log file -> zips it using gzip
tail -F /sampleRemoteDirectory/logs/tomcat/sample.log > "$TomcatLogFileName"-Tomcat.log | gzip "$TomcatLogFileName"-Tomcat.log
echo ""
echo "...tailing the log file and saving it as $TomcatLogFileName-JBoss.log.gz"
echo ""
}
to_Tomcat TomcatLogFileName
sleep 10
ret=$?
# last note before the user has to exit the shell script
echo ""
echo "Saved file: $TomcatLogFileName-Tomcat.log.gz"
|
tail -F is meant to be interactive and never exits on its own, so the pipeline never reaches gzip; unless you add a timeout, you should try tail -100 (100 or whatever count you need) to catch the last lines and exit.
main part would be
tail -100 /whatever/sample.log | gzip > /whatever/sample.log.gz
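For example (made-up temp paths standing in for sample.log):

```shell
# Non-interactive variant: grab the last 100 lines and gzip them.
log=$(mktemp)
seq 1 200 > "$log"                      # stand-in for sample.log
tail -100 "$log" | gzip > "$log.gz"
lines=$(zcat "$log.gz" | wc -l)
first=$(zcat "$log.gz" | head -n 1)
```

The compressed file holds exactly the last 100 lines (101 through 200), and the pipeline terminates on its own.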
| Writing a Shell script to tail and gzip a log file |
1,462,966,642,000 |
In Unix/Linux is there any max files size limit that a compression utility ( gzip/compress) can compress. I remember years ago it was mentioned in the gzip page that it can compress files up to 4 gb. Actually I need to compress fillies of around 512 GB regularly. I tested few files with compress utility and found hash code(MD5) of the DB files before compress and after un-compress are same.
|
gzip nowadays can compress files larger than 4 GiB in size, and in fact doesn’t have any limit of its own really (you’ll be limited by the underlying file system). The only limitation with files larger than 4 GiB is that gzip -l, in version 1.11 or older, won’t report their size correctly; see Fastest way of working out uncompressed size of large GZIPPED file for an alternative. This has been fixed in gzip 1.12; gzip -l decompresses the data to determine the real size of the original data, instead of showing the stored size.
There are many other compression tools which provide better compression and/or speed, which you might find more appropriate: XZ, 7-Zip...
| Compression Utility Max Files Size Limit | Unix/Linux |
1,462,966,642,000 |
How can I gzip and copy files to another folder keeping its directory structure with one command in Linux?
For example, I have:
/dir1
/dir1/file1.fit
/dir1/file2.fit
/dir1/file3.fit
/dir1/dir2/file1.fit
/dir1/dir2/file2.fit
/dir1/dir2/file3.fit
After I use a command (Lets we say I copy /dir1 to /another_dir), I want to get:
/another_dir/dir1
/another_dir/dir1/file1.fit.gz
/another_dir/dir1/file2.fit.gz
/another_dir/dir1/file3.fit.gz
/another_dir/dir1/dir2/file1.fit.gz
/another_dir/dir1/dir2/file2.fit.gz
/another_dir/dir1/dir2/file3.fit.gz
Here /another_dir is actually another hard drive. Since no enough space in this target drive (it is a data of 2TB!), please do not suggest me to copy the files first and then gzip all (or vice-versa). Similarly, the gz files should not remain in the source folder after the operation.
|
Assuming you're in the root folder that contains all the directories to compress (in your case /), you can use find along with the xargs command, e.g.
find dir1/ -name "*.fit" -print0 | xargs -i% -r0 sh -c 'mkdir -vp "$(dirname "/another_dir/%")" && gzip -vc "%" | tee "/another_dir/%".gz > /dev/null && rm -v "%"'
Note: You can also replace | tee "/another_dir/%".gz > /dev/null with > "/another_dir/%".gz.
This will find all .fit files in dir1/ and pass them to xargs command for parsing where % is replaced with each of your file.
The xargs command will:
create the empty folder (mkdir) with its parents (-p) as a placeholder,
compress given file (%) into standard output (-c) and redirect compressed output to tee,
tee will save the compressed input into .gz file (since tee by default prints the input to the terminal screen, sending it to /dev/null will suppress it, but it'll still save the content into the given file).
After successful compression, remove the original (rm). You can always remove that part, in order to remove them manually after verifying your compressed files.
It is important that you're in relative folder to your dir1/, so all paths returned by find are relative to the current folder, so you don't have to convert absolute paths into relative (this still can be done by realpath, e.g. realpath --relative-to=$absolute $current, but it will just overcomplicate the above command).
On macOS, to use -r argument for xargs, you need to install GNU xargs (brew install xargs) and use gxargs command instead. Similar on other BSD systems.
Related question: gzip several files in different directories and copy to new directory.
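If the one-liner above feels dense, the same idea can be sketched with a plain shell loop — another_dir and the .fit suffix below are just the example's names, and names containing newlines are not handled:

```shell
# Per-file loop: mirror the tree, gzip each file into the mirror,
# then remove the original only after a successful compression.
root=$(mktemp -d) && cd "$root"
mkdir -p dir1/dir2
echo data1 > dir1/file1.fit
echo data2 > dir1/dir2/file2.fit

find dir1 -name '*.fit' -print | while IFS= read -r f; do
    mkdir -p "another_dir/$(dirname "$f")"
    gzip -c "$f" > "another_dir/$f.gz" && rm "$f"
done
```

Because gzip -c streams straight into the destination, no compressed copy ever lives on the source drive.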
| How to gzip and copy files keeping its directory structure? |
1,462,966,642,000 |
I want to exclude two files from a tar file.. But it doesn't seem to work
cd /var/www/public/api_test
tar -zcvf api_v2.x.tar.gz api --exclude "./api/distributor_test.php" --exclude "./api/library/Distributor_API.php"
|
try (folded across lines with continuations for readability)
tar -zcvf api_v2.x.tar.gz \
    --exclude api/distributor_test.php \
    --exclude api/library/Distributor_API.php \
    api
the directory argument (e.g. api) should be put last
you may use --exclude distributor_test.php (if you only have one file named distributor_test.php )
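A quick reproduction of the layout shows the excludes working (temp directory and a keep.php file invented for the demo):

```shell
# Rebuild the layout, archive with excludes, then list the result.
top=$(mktemp -d) && cd "$top"
mkdir -p api/library
touch api/keep.php api/distributor_test.php api/library/Distributor_API.php

tar -zcf api_v2.x.tar.gz \
    --exclude api/distributor_test.php \
    --exclude api/library/Distributor_API.php \
    api
listing=$(tar -tzf api_v2.x.tar.gz)
```

The listing contains api/keep.php but neither of the excluded files.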
| exclude files in tar.gz |
1,462,966,642,000 |
Here's what I did:
copied some files from server to my local computer
scp root@remotemachine:/var/log/nginx/* /home/me/logs
deleted the files on the server
The next moment I realized, that I forgot to create the target directory on the local machine (/home/me/logs). Now instead of copied files inside 'logs' I see a file called 'logs' that looks like gzip archive, but file-roller doesn't recognize it as a valid gzip archive.
|
In this case scp will copy each source file to /home/me/logs, overwriting /home/me/logs with the contents of each new file.
The result is that /home/me/logs will be a copy of the last source file in the list. All the other source files are lost.
Oops! Regular cp warns and aborts in this case, at least!
| What happens to my files if I scp-ed to non existing directory |
1,462,966,642,000 |
I have a 17 GB tar.gz file which is a tar.gz version of a directory.
I got the following message after the tar operation.
tar: Error exit delayed from previous errors
Now since it takes a long time to download, I just want to be sure that the file is healthy before downloading it. Is there a way to make sure of the file's integrity which does not take a lot of time?
Currently all the checking solutions that I found take a lot of time to complete.
|
You cannot assume the tar.gz file is healthy but IMHO chances it is are quite good.
A very common root cause of the "Error exit delayed from previous errors" message is that a file either changed size while being processed or disappeared just before being read. Depending on what kind of files your archive contains, this can be a non-issue. In any case, tar handles this kind of event properly and the archives created should be usable.
Next time you create your archive, I would suggest omitting the v (verbose) option or redirecting stdout to a file. While verbose output is useful for small archives containing a dozen files or so, it adds useless noise to tar's output for large ones; with stdout redirected, any error messages stay visible on screen instead of getting lost in the terminal emulator's scrollback.
tar czvf large.tar.gz someDirectory >/tmp/large.tar.list
Of course, there is no 100% guarantee your tar.gz file is healthy. The only way to know is to extract it somewhere and see whether any errors show up. Even then, there is still the possibility that some files weren't saved because of access-rights issues, or odder problems like file or path names that are too long or invalid, corrupt file systems, corrupt transmission, etc.
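Short of a full extraction, you can at least verify both layers are readable without writing the contents anywhere — gzip -t checks the compressed stream, and a discarded tar listing walks every member (this proves readability end to end, not that every source file made it in). Sketched on a throwaway archive:

```shell
# Integrity check without extracting: gzip layer, then tar layer.
d=$(mktemp -d) && cd "$d"
mkdir someDirectory && echo hello > someDirectory/f.txt
tar czf large.tar.gz someDirectory

gzip -t large.tar.gz && tar tzf large.tar.gz > /dev/null
ok=$?

# A truncated download fails the same checks.
head -c 20 large.tar.gz > broken.tar.gz
gzip -t broken.tar.gz 2>/dev/null
bad=$?
```

On a 17 GB archive this still decompresses everything once, but it needs no scratch space.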
| How to verify the integrity of a tar.gz file? |
1,462,966,642,000 |
I have a bunch of .gz files I'm checking the integrity after data transfer with gzip -t -v file
the output I'm getting is
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
....
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
gzip: C2_CRRA200017850-1a_H3LJWDSXY_L1_2.fq.gz: extra field of 6 bytes ignored
OK
What do these repeated lines indicate and how I get just the final OK as output?
|
This is normal and there is nothing wrong with the files. It's just that they are bgzip files and not gzip files. Bgzip has some extra fields that gzip doesn't know about:
The BGZF format written by bgzip is described in the SAM format specification available from http://samtools.github.io/hts-specs/SAMv1.pdf.
It makes use of a gzip feature which allows compressed files to be concatenated. The input data is divided into blocks which are no larger than 64 kilobytes both before and after compression (including compression headers). Each block is compressed into a gzip file. The gzip header includes an extra sub-field with identifier 'BC' and the length of the compressed block, including all headers.
So you can just ignore those messages, or you can remove the -v option which isn't needed anyway or you can use bgzip -t instead of gzip -t.
| gzip -t output "gzip: filename.gz: extra field of X bytes ignored" |
1,462,966,642,000 |
A long time ago, before I tried incremental and differential backups, I tried to tar/gzip several similar large (1 GB) directories, but they did not compress any better than tarring and gzipping each directory individually. My guess why it didn't work is this:
tar was likely not going to put duplicate files next to each other
because files were far away, they would be in separate gzip DEFLATE blocks and so not be compressed together (I've also asked how far)
Is this reasoning correct?
|
Yes, your reasoning is correct, as tar doesn't sort files by extensions (which could have helped a lot to achieve higher compression ratios) and gzip is a very old compression algorithm with a relatively modest dictionary, just 32KB.
Please try using xz or p7zip instead.
Here's a compression string which allows me to achieve the highest compression ratio under Linux:
7za a -mx=9 -myx=9 -mfb=273 -bt -slp -mmt4 -md=1536m -mqs archive.7z [list of files]
This requires a ton of memory (at the very least 32GB of RAM). If you remove -mmt4 and reduce the dictionary size to, say, 1024m, 16GB would be enough.
Speaking of sorting files for tar. I wrote a script which does just that a few years ago: https://github.com/birdie-github/useful-scripts/blob/master/tar_sorted
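The 32KB-window point is easy to demonstrate with gzip alone (just an illustration with random data, not the 7za invocation above):

```shell
# Two identical 1 MiB random blocks back to back: the second copy
# lies far outside gzip's 32 KiB window, so it compresses as if it
# were brand-new data.
t=$(mktemp -d)
head -c 1048576 /dev/urandom > "$t/block"
cat "$t/block" "$t/block" > "$t/double"

single=$(gzip -c "$t/block"  | wc -c)
double=$(gzip -c "$t/double" | wc -c)
# double ends up near 2x single; a long-range compressor like xz
# with a big enough dictionary would get close to 1x.
```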
| Why won't tar/gzip compress two similar large directories? |
1,462,966,642,000 |
Bit of a weird one - I've got a honeypot running dionaea, which is a tool that consolidates any binaries that were uploaded to the device in single location (/data/dionaea/binaries).
However, every so often (kind of like logrotate), the /data/dionaea/binaries directory gets gzipped into a file called binaries.tgz.n (where n is incremented each time the rotate happens), and then gets gzipped again into a file called binaries.tgz.n.gz.
I know with a normal tgz or gz archive you can list the contents of the archive with tar tzf /path/to/tgz and gzip --list /path/to/gz (or tar zf /path/to/gz) respectively, but is there a way to pipe the embedded archive into a new tar command to list its contents at the same time (instead of having to actually extract the "outside" gz)?
|
You can pipe to tar:
gunzip < /path/to/gz | tar tzf -
(Or with GNU tar, you can just use | tar tz.)
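A quick end-to-end check of that pipeline (file names made up to match the question's pattern):

```shell
# Build binaries.tgz.1.gz (a tgz gzipped again) and list the inner
# tar without writing an intermediate file.
w=$(mktemp -d) && cd "$w"
mkdir binaries && echo x > binaries/sample.bin
tar czf binaries.tgz.1 binaries
gzip binaries.tgz.1                     # -> binaries.tgz.1.gz

inner=$(gunzip -c binaries.tgz.1.gz | tar tzf -)
```

gunzip strips the outer gzip layer, and tar's z handles the inner one while listing.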
| List contents of a tgz archive inside a gz archive |
1,462,966,642,000 |
I have multiple log files from each day that I need to merge together. Each comes from a different server. The job that puts them there sometimes gets interrupted and files get truncated. In that case the file gets written with a different name next time it runs. So I may end up with a list of log files like:
server-1-log.gz (Yesterday's log file)
server-1-log.1.gz (Today's log file that got interrupted while transferring and is truncated)
server-1-log.2.gz (Today's log file re-transferred and intact)
server-2-log.gz (Yesterday's log file)
server-2-log.1.gz (Today's log file)
All the log files start with a time stamp on each line, so it is fairly trivial to sort and de-duplicate them. I've been trying to merge these files using the command:
zcat *.gz | sort | uniq | gzip > /tmp/merged.gz
The problem is that the truncated log file produces the following error from zcat:
gzip: server-1-log.1.gz: unexpected end of file
It turns out that zcat completely exits when it hits this error, without reading all the data from the other files. I end up losing the data that exists in the other good files because one of the files is corrupt. How can I fix this?
Can I tell zcat not to exit on errors? I don't see anything in the man page for it.
Can I fix truncated gzip files before calling zcat?
Can I use a different decompression program instead?
|
I’m guessing you’re using the gzip script version of zcat. That just executes gzip -dc, which can’t be told to ignore errors and stops when it encounters one.
The documented fix for individual corrupted compressed files is to run them through zcat, so you won’t get much help there...
To process your files, you can either loop over them (with a for loop or xargs as you found), or use Zutils which has a version of zcat which continues processing when it encounters errors.
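A sketch of the per-file loop (sample file names invented for the demo); running zcat once per file means a truncated archive only loses its own tail instead of aborting the whole merge:

```shell
# Merge good and truncated gzipped logs, tolerating per-file errors.
m=$(mktemp -d) && cd "$m"
seq 1 5 | gzip > server-1-log.gz
seq 3 9 | gzip > server-2-log.gz
# Fake an interrupted transfer of a third file.
seq 5000 6000 | gzip | head -c 50 > server-1-log.1.gz

for f in *.gz; do
    zcat "$f" 2>/dev/null || echo "partial file: $f" >&2
done | sort -n | uniq > merged
```

Note that a truncated member can end mid-line, so the line right at the boundary with the next file may be mangled; with timestamped log lines that one stray line is usually acceptable.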
| Merge possibly truncated gzipped log files |
1,462,966,642,000 |
Given that I:
have a directory that contains over 1000 files
have a gzip'ed tar file that contains a subset of those files (x.tgz)
What is the single command line (if it is possible) that will read the gzip'ed tar file's contents and removes all of the files from the directory that are contained within the tar file?
|
You would need to be at the directory from which the tar file was created, for instance $HOME. Then if you had a tgz of your Documents directory located safely in /backup/Documents.tgz you would do this:
$ for file in $(tar -tzf /backup/Documents.tgz); do \
[[ -f $file ]] && rm $file || echo "$file does not exist"; done
If you want to also delete directories you would use rm -fr $file.
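If any of the archived names could contain spaces, a while/read loop over the listing is safer than the $(...) word-splitting above — a sketch (names containing newlines are still not handled):

```shell
# Remove every regular file listed in the archive, reading the
# listing line by line instead of word-splitting it.
base=$(mktemp -d) && cd "$base"
mkdir -p a/b
echo x > a/b/1; echo y > a/b/2; echo z > a/keep
tar czf subset.tgz a/b/1 a/b/2

tar -tzf subset.tgz | while IFS= read -r file; do
    [ -f "$file" ] && rm -- "$file"
done
```

Only the files named in the archive are deleted; everything else in the directory survives.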
| How to delete files from a directory based on the contents of a gzipped tar file? |
1,462,966,642,000 |
Why does this give such output (both commands are supposed to do the same thing) and how can one make them give identical output?
diff <(cat some_file | gzip -c - | base64) <(gzip -c some_file | base64)
1,2c1,2
< H4sIACSOZFUAA2XNsRHAMAgDwDqZRkIQ8P6L+c5xnIL2m2c5E6BdIQA5cHPTaGTqlI3ki2jSoWrk
< e1Tw0PNSMT4KdPKfJgNiJT++AAAA
---
> H4sICGcqSlUAA2Z0X2FkLnNob3J0AGXNsRHAMAgDwDqZRkIQ8P6L+c5xnIL2m2c5E6BdIQA5cHPT
> aGTqlI3ki2jSoWrke1Tw0PNSMT4KdPKfJgNiJT++AAAA
The contents of the file are:
184170012 53000790
184170019 53000790
184170023 53000790
184170027 53000790
184170034 53001233
184170038 53001233
184170042 53000351
184170046 53000815
184170050 53000815
184170054 53000815
There is a tab character between two columns and new line at the end of each line.
|
gzip encodes the filename of the input file into its output, even with the -c option. You can see this with gzip -c some_file | strings | head -1. However, when reading from stdin, gzip does not do that, since it doesn't know the filename. You can tell gzip to omit the filename and timestamp from its output with -n.
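With -n on both sides, the two pipelines become byte-identical, which is easy to verify:

```shell
# -n drops the stored name and timestamp, so the output no longer
# depends on whether gzip read a named file or stdin.
f=$(mktemp)
printf '184170012\t53000790\n' > "$f"

cat "$f" | gzip -nc > "$f.a.gz"
gzip -nc "$f" > "$f.b.gz"
```

cmp shows the two .gz files are identical, so the diff from the question would now be empty.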
| gzip files with cat and from pipe gives different results |
1,462,966,642,000 |
Whenever I need to decompress a file which was compressed with Gzip, I need to rename the file with a .gz extension , I tried other compression utility like zip, bzip2 and they don't seem to care about extensions. Why is this is the case with Gzip , isn't just checking magic numbers to identify a type of file or by others means would be useful right?
Or is it like this for a purpose?
|
As I understand it, the general idea is that you can run
gunzip *
and it will use the file extension as a first filter; as described in the man page:
gunzip takes a list of files on its command line and replaces each file whose name ends with .gz, -gz, .z, -z, or _z
(ignoring case) and which begins with the correct magic number with an uncompressed file without the original extension.
gunzip also recognizes the special extensions .tgz and .taz as shorthands for .tar.gz and .tar.Z respectively. When compressing, gzip uses the .tgz extension if necessary instead of truncating a file with a .tar extension.
Another significant feature of gunzip is that it ignores the name stored in the compressed file by default, and extracts to a file whose name is determined by removing the suffix from the file to be decompressed; it can’t do that if the file doesn’t have a known suffix.
Of course filtering using only the magic number would have a similar effect when handling multiple files, but there are a couple of differences:
files with a non-gzip extension which aren’t readable by the current user wouldn’t necessarily cause errors (but they currently do, because gunzip opens files even if it’s not going to try to decompress them);
avoiding the need to check the magic number in every file would have saved a lot of time back when gzip was implemented.
Now that it’s documented in this fashion, it can’t easily be changed (although the gzip maintainers have recently demonstrated their willingness to introduce significant changes in behaviour, e.g. with gunzip -l).
| Why GZIP utility cares about extension? |
1,462,966,642,000 |
Context
I am required to use a poorly designed java application that logs A LOT of information while it is running. Under standard usage, it will create 100s of MB of logs per hour.
I don't need historical logs and it currently seems that the logrotate utility can't keep up with it as it doesn't run frequently enough. The application is closed source and rotates it's own logs at around 36MB.
My Linux distribution is RHEL7.
Question
I'd like to reduce wasted space by compressing and rotating the logs.
As the app already splits out the logs into new files, is it possible to automatically compress newly created files in a directory?
Is it possible to automatically delete all files in the format of assessor-cli.X.log where X is a digit greater than... say 5 (i.e. keep only the 5 most recent logs).
Here is my attempt at a logrotate file:
# cat /etc/logrotate.d/cis_assessor
/usr/share/foreman-proxy/Ansible/CIS/audit/Assessor-CLI-4.0.2/logs/assessor-cli.log {
missingok
notifempty
compress
rotate 5
size 30M
This logrotate job would need to catch the log between the size of 30MB and 36MB to actually come into effect which might only be a 10 second period. That's why I'm asking about the manual path of compressing and deleting files without logrotate.
|
As the app already splits out the logs into new files, is it possible to automatically compress newly created files in a directory?
Yes, it is. Just target the newly created file with something that can watch for new files in a directory (like entr)
So you'll create a logrotate config like this (/etc/logrotate.d/newlogrotateconf)
/usr/share/foreman-proxy/Ansible/CIS/audit/Assessor-CLI-4.0.2/logs/assessor-cli.log {
missingok
notifempty
compress
rotate 5
}
Then you'll run entr in a loop on the directory to tie logrotate into inotify/epoll,
echo -n /usr/share/foreman-proxy/Ansible/CIS/audit/Assessor-CLI-4.0.2/logs/
| ./entr -dnc logrotate --force /etc/logrotate.d/newlogrotateconf
| Is it possible to automatically zip newly created files in a directory |
1,462,966,642,000 |
under SQL directory we have only the tmp folder (tmp folder usage 59G)
is it possible to compress the folder tmp without to leave the original tmp folder ? , so the compression will work on the original folder
the folder usage:
root@serverE1:/var/backup/SQL # du -sh *
59G tmp
so after compression I will see only this : (8G is only example)
8G tmp.tar.gz
|
There are two problems to solve:
how to remove the files without interfering with your output, and
where to put the output while it is being created.
If you happen to not have any dot-files in /var/backup/SQL, it is simple:
just create your output named with a leading ".",
add to the tar-file using the --remove-files option, and
rename the output to tmp.tar.gz when done.
Something like
cd /var/backup/SQL
tar cfz .tmp.tar.gz --remove-files * && mv .tmp.tar.gz tmp.tar.gz
If you do have dot-files, then you could construct a list of the files to be tar'd and then use that list in constructing the tar-file. Using Linux, you could use the -T (--files-from) option to read this list, e.g.,
cd /var/backup/SQL
find . -type f >/tmp/list
tar czf tmp.tar.gz --remove-files --files-from /tmp/list
(Someone's sure to suggest process substitution rather than a temporary file, but that has the drawback of limited size, which may be a problem).
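A quick run-through of the dot-file variant (throwaway paths instead of /var/backup/SQL; --remove-files is GNU tar-specific):

```shell
# Name the in-progress archive with a leading "." so the "*" glob
# never matches it; --remove-files deletes originals as it goes.
sql=$(mktemp -d)
mkdir -p "$sql/tmp" && cd "$sql"
echo dump1 > tmp/backup1.sql
echo dump2 > tmp/backup2.sql

tar cfz .tmp.tar.gz --remove-files * && mv .tmp.tar.gz tmp.tar.gz
```

Afterwards only tmp.tar.gz remains, and its listing still contains the original file paths.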
| how to compress a folder without to leave the original folder and without to remove the original folder |
1,462,966,642,000 |
I am attempting to create a Custom Action in Thunar (File Manager) that will extract a gzip archive into a subdirectory of the same name (e.g. abc.tar.gz to abc/). I created this command, which works, although it puts single quotes around the file name (e.g. 'abc'/ instead of abc/). I ran the equivalent command manually and it doesn't contain single quotes. how can i remove them, and where are they coming from? Is there a better method of doing this?
tar -xzvf %n -C "$(f="%n"; g=${f%%.tar.gz}; mkdir -p $g; echo $g)"
|
I would try removing the quotation marks around %n. It appears that thunar puts its own marks there, which is why you have them in the folder name.
Also, when you check thunar's examples, they never put the marks around expanded variables.
| Thunar custom action: Extraction to subdirectories |
1,462,966,642,000 |
I tried to compress a directory using tar -zcvf output.gz input_dir. I suppose I should have used a .tgz extension for compressing.
How do I untar output.gz? I tried both tar zxvf output.gz, which didn't yield anything, and gunzip output.gz, which resulted in a corrupted archive.
Is there a proper way of extracting output.gz?
|
tar -zcvf output.gz input_dir
Wrong!
tar -zcvf output.tar.gz input_dir
or
tar -zcvf output.tgz input_dir
Good.
Then estract with
tar -xvf output.tgz
or
tar -xvf output.tar.gz
You can omit z on a modern distro; if that doesn't work, or you're using an old distro, use the z switch
| Extract files from a corrupted .gz file |
1,462,966,642,000 |
I have a large tar file but it could not be downloaded completely as the browser crashed when verifying the download. Is it possible to extract some files from this tar?
I am able to view the files using tar -tf abc.tar and this shows the directories and folders
a/
a/b/
a/b/1
a/b/2
However if I use tar -zxvf abc.tar a/b/1 it gives
tar: Error opening archive: Failed to open abc.tar
Is there any way to only extract the available files in the tar in such a case?
|
The archive isn’t compressed, so you need to drop the -z flag:
tar -xvf abc.tar a/b/1
However
Failed to open abc.tar
suggests that the tarball itself is no longer there.
| How to extract specific file from partially downloaded tar? |
1,462,966,642,000 |
Objective: The efficient backup and transfer of ~/dircontaininginnumeraltinyfiles/ to /mnt/server/backups/dir.tgz. My first thought was of course rsync and gzip. But I'm open to other routes as well.
(At the end of the desired backup process, /mnt/server/backups/dir.tgz would be a full backup i.e. containing all of the files, not just ones changed since last backup. In a nutshell, I'm simply looking for a solution that's more efficient on the compression and transfer steps than tar -cvzf /mnt/server/backups/dir.tgz ~/localdir/.)
The local creation of any files is undesirable (e.g. a local .tgz backup and subsequent sync to the server); anything local should stay in memory, e.g. via piping.
To clarify, the reason I don't want to simply rsync the dir to the local network server is that the source directory contains innumerable, highly compressible, tiny files. So for backup purposes a single, much smaller .tgz file is quite attractive.
That said, the significant majority of the files are unchanged per backup, so a simple tar -cvzf /destination/blah.tgz ~/sourcedir/ is rather inefficient, hence the desire for a smart, delta-only sort of process re the compression aspect.
While the amount of data isn't overbearing for 1Gb local network, some only have a 100Mb connection, hence the desire for a smart, delta-only sort of process for the transfer aspect as well would be desirable.
As a side note, one aspect I'm right now doing homework on is tar's --listed-incremental option and gzip's --rsyncable option.
|
Using tar -cvzf /mnt/server/backups/dir.tgz ~/localdir/ for the time being. Respectfully stepping away from this thread. Thanks for the input. Best regards to all.
| Rsyncing directory to a backup gzip file |
1,547,153,754,000 |
Recently I've changed logs configuration for letsencrypt, because there was no given one and I have files:
letsencrypt.log
letsencrypt.log.1
letsencrypt.log.10
letsencrypt.log.10.gz
letsencrypt.log.11
letsencrypt.log.11.gz
letsencrypt.log.12
letsencrypt.log.12.gz
letsencrypt.log.13
letsencrypt.log.13.gz
letsencrypt.log.14
letsencrypt.log.14.gz
letsencrypt.log.15
letsencrypt.log.15.gz
letsencrypt.log.16
letsencrypt.log.16.gz
where even files have 1409 bytes and odd files have 0 bytes. Gzipped files however have some content (which differs). The configuration for log rotate is:
/var/log/letsencrypt/*.log {
daily
rotate 32
compress
delaycompress
missingok
notifempty
create 644 root root
}
How should I change the log rotate configuration to leave only:
first two files not gzipped,
rest of the files gzipped,
get rid of empty files?
|
Ok, so I've managed to make proper log rotate:
/var/log/letsencrypt/*.log {
weekly
rotate 9
compress
delaycompress
missingok
create 644 root root
}
So the difference is that I've removed notifempty.
| Logs gzipped and not gzipped |
1,547,153,754,000 |
I am experiencing problems trying to zcat the contents of a particular gzip archive containing SQL text. The problem seems localised to one particular file on one server.
I have copied around 10 gzipped SQL files from our backup server using rsync to a new replication server that I am trying to restore them onto. In all but one case this has worked fine, simply piping the files using zcat into MySQL.
However, one file will not work. Attempting to perform any kind of read operation on the file all produce an error, "Operation not Permitted"
I can delete, chmod and chown the file and have ensured that I have full ownership and permissions on it. It's visible attributes appear to be identical to all of the other files that worked. I am also able to rename it and move it into different directories on the same disk. Attempting to copy the file, read it in any way, or move it to another disk however all generate the "Operation not permitted" Error. I have also tried to look at the attributes using lsattr, but this also generates the same error.
I can read the file on the original source server, and have also FTP'd it to my windows PC where it can also be read and extracted. I have even copied it from the original server via FTP to my PC and then back to the destination server via FTP and as soon as it hits the destination server I am unable to read it again.
My OS is CentOS 7 and The disk in question is a 100G LVM volume formatted in ext4. I have run fsck against it and it reports as clean.
Sadly the extracted SQL data file is too large to fit on the server along with the database that it is intended to create, and is effectively too large to stream the extraction over the network from another server.
Does anyone have any idea what might cause this behaviour? I am at a loss.
Thanks in advance.
|
Having revisited this issue recently, I eventually figured out that in this case the problem was due to the McAfee ePO agent which had been installed on the server by our infrastructure team without my knowledge. This agent was blocking access to particular large files on the system, including the one in question.
| Unable to read or unzip certain archives on one specific server |
1,547,153,754,000 |
In Linux Ubuntu about the 'tar' command for these versions:
tar -tzf /path/to/filename.tar.gz # Show the content
tar -xzf /path/to/filename.tar.gz # Extract the content
Observe both commands use the z option, and well, they work as expected.
Through man tar, about the z option, it indicates:
-z, --gzip, --gunzip, --ungzip
Filter the archive through gzip(1).
Question
Why tar command uses gzip command through 'z' option?
Extra Question
About the Filter the archive through gzip(1). part.
Why is need it "filter" in the two commands shown above? or What is the meaning or context of filter?
|
Archiving and compression are two separate things.
Most archiving programs on Windows (e.g. zip, 7z, rar, and many more) combine the two into one program that does both archiving and compression - so people who are used to using Windows tend to think of them as being just one inseparable thing.
While many of these programs exist on unix/linux, largely for compatibility with non-unix systens, it is far more common for the compressing and archiving functionality to be done by separate programs. Unlike MS-DOS/Windows archivers, unix-native programs understand and make use of unix file metadata like ownership and permissions, and some even handle ACLs correctly.
tar is an archiving program. It allows one or more files to be stored in a .tar archive. This archive is not compressed. It was originally used for writing a stream containing multiple files and associated metadata (filenames, ownership, perms, etc) to tape. Or to a file, as any stream of bytes can be redirected to a file or piped to another program. tar is not the only archiving program around, there are many others including cpio, ar, afio, pax, and more.
gzip is a compression program. It can compress any single file to a compressed version of itself. Or it can compress data from stdin and output it to stdout (i.e. it can work as a "filter"). Again, gzip is not the only compression/decompression program around, it is one of many.
tar can use gzip to compress a .tar archive before it is written to disk. And to decompress a compressed archive before reading from it.
Depending on what version of tar you have, it may be able to use other compression programs instead of, or as well as, gzip. For example, GNU tar has the following compression-related options:
Compression options
-a, --auto-compress Use archive suffix to determine the compression program.
-I, --use-compress-program=COMMAND Filter data through COMMAND. It must accept the -d option, for decompression. The argument can contain command line options.
-j, --bzip2 Filter the archive through bzip2(1).
-J, --xz Filter the archive through xz(1).
--lzip Filter the archive through lzip(1).
--lzma Filter the archive through lzma(1).
--lzop Filter the archive through lzop(1).
--no-auto-compress Do not use archive suffix to determine the compression program.
-z, --gzip, --gunzip, --ungzip Filter the archive through gzip(1).
-Z, --compress, --uncompress Filter the archive through compress(1).
--zstd Filter the archive through zstd(1).
And, worth noting, the other archiving programs can also be used with compression programs - either through command-line options like -z or -Z, etc; or by piping the output of the archiver into a compression program before redirecting the compressor's output to a file (or, conversely, piping the output of a decompressing program into an archiver to list or extract its contents)
You can "mix-and-match" the archiving and compression programs as needed, allowing you to take advantage of improvements in archiving and/or compression technology.
Most archivers, including GNU tar, support this via pipes, but GNU tar also has several built-in options for some well-known programs AND a convenient -I option for using other compression programs that don't have their own built-in option - perhaps implementing a new compression algorithm or a new implementation of an existing algorithm. For example, programs like pigz, pixz, pbzip2 etc (instead of gzip, xz, bzip2, etc) which are parallelised versions of those compression programs which can take advantage of multi-core/multi-thread CPUs to greatly reduce the time needed to compress or decompress the data.
A "filter" is a generic term for a program used in a pipeline to process (and possibly modify in some way) the output of one program before either redirecting it to a file or piping it to the next program in the pipeline.
Some programs (like tar with -z etc) can set up the filtering pipeline themselves, without requiring the user to do it in the shell (e.g. tar xfz filename.tar.gz is basically the same as gzip -d filename.tar.gz | tar xf -, and tar cfz filename.tar.gz ... is essentially the same as tar cf - ... | gzip > filename.tar.gz)
Many unix programs are written so that they can be used as filters in a pipeline - e.g. gzip can compress either an existing file or it can compress its input stream (stdin) and send the output to stdout....and a simple program like cat can just pass its stdin directly to stdout or optionally number the lines (with -n), make end-of-line and control and other codes visible (with options like -v, -E, -A, -t).
BTW, because pipelines are so useful, it's very common for people to write their own scripts (in awk or perl or whatever) so that they are capable of taking their input from stdin and writing to stdout - i.e. it's common for people to write their own filters.
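As a toy illustration of writing your own filter (the upcase name here is just an example), any command that reads stdin and writes stdout can slot into a pipeline:

```shell
#!/bin/sh
set -e
# a minimal home-made "filter": stdin in, transformed stdout out
upcase() { tr '[:lower:]' '[:upper:]'; }

printf 'hello filter\n' | upcase    # prints: HELLO FILTER
```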
| Why tar command uses gzip command through 'z' option? |
1,547,153,754,000 |
I need to find files newer than x days, and then turn it into a gzip, but I want to do it using pigz.
For now I'm doing it the slow way; this works:
find /path/to/src -type f -mtime -90 | xargs tar -zcf archive.tar.gz
But pigz is tremendously faster, so I want to run this gzip using pigz instead. I tried this but it isn't working:
find /path/to/src -type f -mtime -90 | xargs tar -zcf | pigz > archive.tar.gz
It returns an error because I just guessed what to do (and tried a couple ways):
tar (child): /path/to/src: Cannot open: Is a directory
tar (child): Error is not recoverable: exiting now
How to take the first line that works and pipe that into pigz?
|
With GNU tar on any shell that supports process substitution (e.g. bash, ksh, zsh):
tar cf archive.tar.gz -I pigz --null -T <(find /path/to/src -type f -mtime -90 -print0)
This uses pigz to do the compression, and takes the (NUL-separated) list of files to include in the archive from the output of find ... -print0, via the -T or --files-from=FILE option and process substitution.
Alternatively, if you are using a minimalist POSIX-features-only shell (e.g. ash or dash, or bash running as /bin/sh or with --posix or set -o posix or with the POSIXLY_CORRECT environment variable set) you can pipe a NUL-separated list of filenames into GNU tar. The - following the -T option tells tar to read the file list from stdin.
find /path/to/src -type f -mtime -90 -print0 | tar cf archive.tar.gz -I pigz --null -T -
Either of these work with any valid filename, even those containing spaces, newlines and shell metacharacters. It also avoids the problem of too-many-filenames mentioned by @Kusalananda in his comment.
BTW, you may want to investigate using pixz instead of pigz. It does xz compression (which generally does much better compression than gzip, but is slower), and pixz will add an index to speed up extraction of specific files if it detects tar-like input. Both pixz and xz-utils are packaged for most common Linux distributions so should be easy to install.
| Pipe find list of files into xargs gzip and pipe again into pigz |
1,547,153,754,000 |
If I make a .tar.gz via
tar czvf - ./myfiles/ | pigz -9 -p 16 > ./mybackup.tar.gz,
Can I safely unzip an already gzip'd file ./myfiles/an_old_backup.tar.gz within the ./myfiles directory via
gzip -d mybackup.tar.gz
tar -xvf mybackup.tar
cd myfiles
gzip -d an_old_backup.tar.gz
tar -xvf an_old_backup.tar
? And can one do this recursive compression safely ad infinitum?
|
If your question can be rephrased as "is it OK to have compressed
archives within compressed archives?", then the answer is "yes".
This may not be the most convenient (as you note, you will have to run
tar several times to get everything unpacked), and applying
compression to data that has already been compressed may not yield an
additional reduction in size, but it will all work.
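A quick sketch of both points: the nesting unpacks fine layer by layer, and the second compression pass gains little (file names are made up):

```shell
#!/bin/sh
set -e
# ~100 KB of highly compressible data
yes "the quick brown fox" | head -c 100000 > data.txt

gzip -c data.txt > data.txt.gz          # first pass: large reduction
gzip -c data.txt.gz > data.txt.gz.gz    # second pass: little or no further gain

wc -c data.txt data.txt.gz data.txt.gz.gz

# unpack the layers in reverse order and compare with the original
gzip -dc data.txt.gz.gz | gzip -dc | cmp - data.txt && echo "roundtrip OK"
```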
| Is it safe to do recursive compression with tar, gzip and pigz? |
1,547,153,754,000 |
I would like to make a ZIP file using zip command. Zip file would contain large number of directories containing large numbers of subdirectories, files etc.
Is there any way to calculate final size of ZIP file, but before doing actual zipping on the disk?
For example: calculating the final ZIP file size to determine if the ZIP will fit into a container of a specific size (we cannot perform the on-disk operation even in /tmp, because if it will not fit, then zipping will fail without giving info on the archive size)
|
From the zip man page:
Streaming input and output. zip will also accept a single dash ("-") as the zip file name, in which case it will write the zip file to standard output, allowing the output to be piped to another program.
So:
$ zip -r - foo | wc -c
Will tell you the compressed size in bytes of the directory foo.
7z cannot write to stdout for zip files.
The other alternative is to create a memory disk and compress to it.
| Linux determine ZIP size without compression |
1,547,153,754,000 |
I had a file (was a text file at first), did xxd -r to it and saved it to tempfile2. Afterwards did file tempfile2 and it wrote:
tempfile2: gzip compressed data, was "data2.bin", from Unix, last modified: Fri Nov 14 10:32:20 2014, max compression
I tried:
gzip -d tempfile2>tempfile3
gzip -d tempfile2.gz > tempfile3
gzip -d tempfile2.gz > tempfile3.gz
gunzip tempfile2
gunzip tempfile2.gz
gunzip tempfile2 > tempfile3
... all combinations possible.
neither of that worked. It either said no such file in directory or unknown suffix -- ignored
|
You don't have a tempfile2.gz you only have a tempfile2.
Decompress by doing
gzip -d < tempfile2 > tempfile3
Normally gzip would expect a .gz file for decompression and so you can do
mv tempfile2 tempfile2.gz
gzip -d tempfile2.gz
which would give you an uncompressed tempfile2. Or you could do
mv tempfile2 tempfile2.gz
gzip -cd tempfile2.gz > tempfile3
the -c making sure the output is written to standard output. Or do
zcat tempfile2 > tempfile3
so you don't need to provide any options, for which picking the correct ones seem the source of your problems.
| have file tempfile2.gz (checked via file command) but gunzip/gzip -d does not work |
1,547,153,754,000 |
I'm looking for a way to create a .tar.gz file from a directory but I have some doubts. I'm know that the command for create the file is:
tar -zcvf /path/to/compressed/file.tar.gz /path/to/compress
And then I will get a file.tar.gz but this keep the path to the original sources. So this is what I'm looking for:
Should be compress as .tar.gz
If possible it shouldn't keep the path to the compressed directory. I mean I only need the compressed directory and its content. If I uncompress the file under Windows for example I'll get /path[folder]/to[folder]/compress[folder+content] and I just want the latter
If it's possible, can I omit some directories? For example, I'd like to compress the app/ folder and its content is:
drwxrwxr-x 6 ubuntu ubuntu 4096 Feb 24 14:48 ./
drwxr-xr-x 10 ubuntu ubuntu 4096 Feb 26 22:33 ../
-rw-rw-r-- 1 ubuntu ubuntu 141 Jan 29 06:07 AppCache.php
-rw-rw-r-- 1 ubuntu ubuntu 2040 Feb 24 14:48 AppKernel.php
-rw-rw-r-- 1 ubuntu ubuntu 267 Jan 29 06:07 autoload.php
-rw-r--r-- 1 root root 94101 Feb 19 21:09 bootstrap.php.cache
drwxrwxrwx 4 ubuntu ubuntu 4096 Feb 25 16:44 cache/
-rw-rw-r-- 1 ubuntu ubuntu 3958 Feb 24 14:48 check.php
drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 24 14:48 config/
-rwxrwxr-x 1 ubuntu ubuntu 867 Jan 29 06:07 console*
-rw-rw-r-- 1 ubuntu ubuntu 6148 Jan 29 06:07 .DS_Store
-rw-rw-r-- 1 ubuntu ubuntu 143 Jan 29 06:07 .htaccess
drwxrwxrwx 2 ubuntu ubuntu 4096 Feb 24 14:48 logs/
-rw-rw-r-- 1 ubuntu ubuntu 1118 Jan 29 06:07 phpunit.xml.dist
drwxrwxr-x 5 ubuntu ubuntu 4096 Jan 29 06:07 Resources/
-rw-rw-r-- 1 ubuntu ubuntu 30404 Feb 24 14:48 SymfonyRequirements.php
But I want to leave out cache/, logs/ directories and bootstrap.php.cache file, how? Is that possible?
I need to append the current date (DD/MM/YYYY-H:M:S) to the file name, how?
Can I get some advice or help on this? I'm planning to add this to a Bash script so it will work as a bash script and not as a command line
Update: testing inside the script
Following @Ariel suggestion I have added this line to a bash script:
read -e -p "Enter the directory to compress: " -i "/var/www/html/" directory
read -e -p "Enter the filename: " filename
FILENAME = "$filename-`date +%d-%m-%Y-%X`.tgz"
cd "$directory"
tar -zcvf /home/$FILENAME --exclude cache --exclude logs --exclude bootstrap.php.cache --exclude composer.lock --exclude vendor
But I'm getting this error:
Enter the directory to compress: /var/www/html/lynxoft/apps/checkengine/
Enter the filename: checkengine
/usr/local/bin/script-task: line 109: FILENAME: command not found
tar: Cowardly refusing to create an empty archive
Why is FILENAME treated as a command and not as a var as the docs say?
Update 2: still compressing the whole path
I have fixed some issues thanks to users comments and lines on the script looks likes this:
read -e -p "Enter the directory to compress: " -i "/var/www/html/" directory
read -e -p "Enter the filename: " filename
FILENAME="$filename"-$(date +%d-%m-%Y-%T).tgz
cd "$directory/.."
tar -zcvf /home/$FILENAME "$directory" --exclude="*cache*" --exclude=logs --exclude=bootstrap.php.cache --exclude=composer.lock --exclude=vendor --exclude=.git
The only remaining issue is that compressed file still having the whole path and not just the end directory. For example:
Enter the directory to compress: /var/www/html/lynxoft/apps/checkengine/
Enter the filename: file
That result in file-28-02-2015-17:44:32.tgz but content inside the compressed file still having the whole path /var/www/html/lynxoft/apps/checkengine/, why?
|
Just cd to the folder from where you want the tarball tress structure to start, and use relative paths!
So for instance:
# Note, one folder above
cd /path/to/compress/..
# Tell tar to compress the relative path
tar -zcvf /path/to/compressed/file.tar.gz compress
When you uncompress it, it will create the compress folder in the current working directory.
You can exclude files or folders using the --exclude <PATTERN> option, and add a date in the filename with date +FORMAT, for instance:
FILENAME="file-`date +%d-%m-%Y-%T`.tgz"
tar -zcvf /path/to/compressed/$FILENAME --exclude cache --exclude logs --exclude bootstrap.php.cache compress
| Gzip file but excluding some directories|files and also append current date |
1,547,153,754,000 |
I am trying to use tar to recursively compress all files with the .lammpstrj extension within the directory tree starting at the directory whose path is stored in the variable home. home contains the script containing my tar commands and 57 subdirectories, each containing a pair of sub-subdirectories named Soft_Pushoff and Equilibrium_NVT. Each Soft_Pushoff or Equilibrium_NVT directory contains one .lammpstrj file. The loop I am using to get this task done is:
for index in $(seq 1 57)
do
cd $home/$index/Soft_Pushoff/
file=`find ./ -mindepth 1 -maxdepth 1 -name "*.lammpstrj" -print`
tar cvf - ./$file | gzip -9 - > $file.tar.gz
cd $home/$index/Equilibration_NVT/
file=`find ./ -mindepth 1 -maxdepth 1 -name "*.lammpstrj" -print`
tar cvf - ./$file | gzip -9 - > $file.tar.gz
done
As it sweeps one of the 57 subdirectories of home, this section of the code usually prints:
././equilibration_nvt.lammpstrj
././soft_pushoff.lammpstrj
to the terminal. However, in 3 different instances, this is what this section of the code prints out:
././equilibration_nvt.lammpstrj
././soft_pushoff.lammpstrj
./
./time.txt
./soft_pushoff.restart.10000
./equilibration_nvt.lmp
./.tar.gz
tar: ./.tar.gz: file changed as we read it
./equilibration_nvt_pitzer.sh
./eps.txt
././soft_pushoff.lammpstrj
././equilibration_nvt.lammpstrj
None of the files "flagged" by tar is supposed to be operated on by the tar commands I am using, so I am confused as to why they are listed alongside the warning message tar: ./.tar.gz: file changed as we read it? Also, none of these files is actually changing as tar is operating on the .lammpstrj files. What could explain this warning message and, most importantly, can I trust that none of the .lammpstrj.tar.gz files written by my tar commands is corrupt, especially the ones associated with this warning message?
If this is relevant, my script is being run on a remote server. The .lammpstrj files I am trying to compress are up to 15.2 Gb in size. It takes about 2.5 days for my script to run on this remote server.
|
You're honestly overcomplicating things – use your shell to get a list of matching files, and go through these. And let's get rid of all the anachronistic things there – command substitution is usually done using $(…) instead of backticks (`…`) these days (both safer to write and nestable!), seq begin end is superfluous when you have {begin..end}, and any tar I'm aware of has been able to apply gzip compression itself for… as long as I've been using computers, I think. So, your whole script reduces to: (replacing gzip --best with zstd -15 (15 is a very high compression setting for zstd) after our discussion above where you said compression ratio is important to you)
for file in ${home}/{1..57}/{Soft_Pushoff,Equilibration_NVT}/*.lammpstrj; do
zstd -15 "${file}"
done
You of course get better compression if you compress all files into one archive, as I suspect there's parts of them that are similar, and hence compress very well if you put them in the same archive. It's also even easier; no for loop necessary
cd "${home}"
tar -cvf - ${home}/{1..57}/{Soft_Pushoff,Equilibration_NVT}/*.lammpstrj | zstd -15 -o lampstrj-archive.tar.zst
| tar: ./.tar.gz: file changed as we read it | "Flagged" files are unrelated to the file tar is supposed to operate on |
1,547,153,754,000 |
I would like to know how to compress several files according to the pattern using gzip, tar, etc... that is, if I have these files:
server_regTCA.log.2021.02.12
server_regTCA.log.2021.02.13
server_regTCA.log.2021.02.14
server_regTCA.log.2021.02.15
server_regTCA.log.2021.02.16
server_regTCA.log.2021.02.17
server_regTCA.log.2021.02.18
I would like to do something like gzip -9 server_regTCA.log.2021.02.[12-15]
Thanks
|
You haven't mentioned what shell you're using, and that's what controls expansions from patterns.
If you're using bash you can use a brace expansion:
gzip server_regTCA.log.2021.02.{12..15}
If you're using a simpler shell such as sh that doesn't have brace expansions, you're limited to matching with single character patterns:
gzip server_regTCA.log.2021.02.1[2-5]
In both cases you can prefix the command with echo to see how the pattern is expanded by the shell without actually executing the gzip.
If you're looking to compress "older" files, there may be better ways to do it. For example, this will compress all (uncompressed) files matching the shape of your listed filenames that were last changed over a year (366 days) ago:
find ... -type f -mtime +365 -name 'server_regTCA.log.????.??.??' -exec gzip {} +
(Each ? matches a single character. Compressed files have a .gz suffix that won't be matched by the pattern.)
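A small self-check of the -mtime +365 variant, using GNU touch -d to backdate one file (the names mimic those in the question):

```shell
#!/bin/sh
set -e
mkdir -p logs
touch logs/server_regTCA.log.2024.01.01                    # "recent" file
touch -d '400 days ago' logs/server_regTCA.log.2022.01.01  # backdated file

# only the backdated file matches
find logs -type f -mtime +365 -name 'server_regTCA.log.????.??.??'
```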
| Compress several files according to the pattern |
1,547,153,754,000 |
A tar.gz file belonging to root, can be gunzipped by a user, since it is readable by group and public. However after gunzipping, the tar file owner is user, not root anymore. Is it a feature of the program gunzip? or is there another mechanism?
|
On most modern operating systems, including Linux, there is no operation to delete a file. There is an operation to remove a file from a directory (called "unlink"), but it's an operation on the directory, not the file. So if you can modify a directory, you can remove files from it or add files (new ones, or existing ones that you can access) to it.
Deletion of a file is done automatically by the file system when it is no longer in use. A single file can even be added to more than one directory, in which case it cannot be deleted until it is removed from all such directories and is no longer in use by any processes.
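A small demonstration of the "unlink, not delete" point using a hard link (file names are arbitrary):

```shell
#!/bin/sh
set -e
echo "shared content" > original
ln original hardlink     # a second directory entry for the same file

rm original              # unlinks one entry; the file itself lives on
cat hardlink             # prints: shared content
```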
| Why gunzipping a 644 file belonging to root is possible by user and gunzipped file belongs to user |
1,547,153,754,000 |
I would like to backup all files older than 90 days and greater plus gzip them. I could execute:
find /path/to/files -type f -mtime 90 -exec gzip "{}" \;
Problem with this command is it includes files 90 days old and not older ones. So it will gzip June's files but not May's. Thanks!
|
-mtime +90 should do the trick.
| Backing up files [duplicate] |
1,547,153,754,000 |
Disclaimer: Yes, finding files in a script with ls is bad, but find can't sort by modification date.
Using ls and xargs with echo everything is fine:
$ ls -t1 ssl-access*.gz | xargs -n 1 echo
ssl-access.log.2.gz
ssl-access.log.3.gz
ssl-access.log.4.gz
[...]
Changing echo to zcat:
$ ls -t1 ssl-access*.gz | xargs -n 1 zcat
gzip: ssl-access.log.2.gz.gz: No such file or directory
gzip: ssl-access.log.3.gz.gz: No such file or directory
gzip: ssl-access.log.4.gz.gz: No such file or directory
[...]
Duplicate file suffixes?! What is going on here?
UPDATE:
OS is Debian 5.
zcat is a shell script at /bin/zcat:
#!/bin/sh
PATH=${GZIP_BINDIR-'/bin'}:$PATH
exec gzip -cd "$@"
|
I found this problem on my system caused by color ls. In my .bash_profile, I had this:
alias ls="ls --color"
I found the result by sending it to stat, which printed something handy:
$ ls local4.notice-201207* | xargs -n1 -P4 -I{} stat {}
stat: cannot stat `\033[0mlocal4.notice-20120711.gz\033[0m': No such file or directory
Look at those null color codes! It was confusing zcat, which attempted to add a .gz suffix to find a file. The problem was easily solved by changing the ls to color=auto, which disables color output when STDOUT is glued to another process instead of a terminal
alias ls="ls --color=auto"
Good luck!
| Combination of ls, xargs and zcat leads to duplicate file name suffixes? |
1,547,153,754,000 |
I can recall my professor saying it means "Tar-able Gzip." I'm searching it in Google and couldn't find it. I know what Tar and Gzip are. I also know what a TGZ is, but I want to know the meaning of the acronym. I'm really curious whether it is correct or not.
Thanks!
|
tgz is often used as a file name suffix of tar archives that have been compressed using gzip, possibly by compressing it at creation time using tar -cz.
It is just a contraction of tar.gz, which would be the suffix you would get if you first created a tar archive and then compressed it with gzip.
On filesystems that enforces the old "8.3" naming rules of DOS, using tgz as the file name suffix would enable one to store these files too.
If it "means" anything, it means "gzip-compressed tar archive". Just remember that in Unix, file name suffixes do not determine the format of the contents (it's just a help to the user). It used to be that many web browsers, for example, would download and decompress files while retaining the original name and suffix.
| What does TGZ mean? |
1,547,153,754,000 |
I've issued a command gzip -r project.zip project/* in projects home directory and I've messed things up; every file in project directory and all other subdirectories have .gz extension. How do I undo this operation, i.e., how do I remove .gz extension from script, so I do not need to rename every file by hand?
|
Alternatively, you could go into said directory and use:
$ gunzip *.gz
For future reference, if you want to zip an entire directory, use
tar -zcvf project.tar.gz project/
| Wrong zip command messed up my project directory |
1,547,153,754,000 |
I have to make a bash script that gzips a file if it is older than 60 days, and moves it into a subdir whose name is the beginning of the filename. Here is an example of the files I have to work with:
-rw-r--r-- 1 X X 0 2012-10-15 11:19 glux21-x1.csv
-rw-r--r-- 1 X X 0 2012-10-15 11:19 GLUX21-x34.csv
-rw-r--r-- 1 X X 0 2012-10-15 11:19 glux226.csv
-rw-r--r-- 1 X X 0 2012-10-15 11:19 glux228.csv
-rw-r--r-- 1 X X 0 2012-10-15 11:19 glux230.csv
-rw-r--r-- 1 X X 0 2012-10-15 11:19 glux232.csv
-rw-r--r-- 1 X X 0 2012-10-15 11:19 glux234.csv
-rw-r--r-- 1 X X 0 2012-10-15 11:19 glux236.csv
-rw-r--r-- 1 X X 0 2012-10-15 11:19 glux255.csv
So, for example, the glux21-x1.csv should be gzipped and moved in the glux21 subdir, as for the GLUX21-x34.csv file. The glux255.csv should go in the glux255 subdir.
|
find . -maxdepth 1 -type f -mtime +60 | while IFS= read -r x
do
    gzip -9 "$x"              # compress it
    D=${x%%.csv}
    D=${D/-*/}                # remove suffix and everything after the -
    mkdir -p "$D"             # create dest sub folder
    mv "$x.gz" "$D"           # move it
done
This will process all files you needed, and put them into different sub folder respectly.
| bash script to gzip files |
1,547,153,754,000 |
I tried downloading a zipped folder from Arxiv (https://arxiv.org/format/math/0606086 under DVI)
but it downloads as div.gz. I understand that this is the TeX output. I tried TeX and various unzip apps, but none of them worked. I even tried renaming it just in case there was a mistake. Any suggestions? If this is not the right place, feel free to delete this.
|
The file command can be used to identify file format based on the contents of the file.
I clicked the "download DVI" button in Firefox and got a file named 0606086 with no extension.
$ file 0606086
0606086: TeX DVI file (TeX output 2021.09.21:0203\213)
I then ran dvipdf on it and got a readable PDF document as a result:
$ dvipdf 0606086
$ ls -l 0606086*
-rw-r--r-- 1 username username 81088 Sep 21 07:31 0606086
-rw-r--r-- 1 username username 177281 Sep 21 07:32 0606086.pdf
The okular viewer on my KDE desktop environment could also display the file directly, without explicitly converting it to PDF.
The download URL reported by Firefox was https://arxiv.org/dvi/math/0606086? so I decided to take a look at the HTTP headers reported by the site:
$ curl --head https://arxiv.org/dvi/math/0606086?
HTTP/1.1 200 OK
Date: Tue, 21 Sep 2021 04:39:33 GMT
Server: Apache
Strict-Transport-Security: max-age=31536000
Set-Cookie: browser=89.27.98.38.1632199174381535; path=/; max-age=946080000; domain=.arxiv.org
Last-Modified: Tue, 21 Sep 2021 02:03:27 GMT
ETag: "16c691a8-5cb5-5cc77cdb51712"
Accept-Ranges: bytes
Content-Length: 23733
Content-Type: application/x-dvi
Content-Encoding: x-gzip
Content-Type: application/x-dvi matches the actual content, and Content-Encoding: x-gzip indicates the document is delivered compressed with gzip. Looks like my Firefox decompressed it automatically for me, perhaps because I had gunzip available, or perhaps Firefox has built-in support for this compression?
| File with div.gz extension (mistake?) |
1,547,153,754,000 |
I am working on a script to backup and gzip a set of MySQL tables, scp/ssh them to a different server and then unpack them
current script is:
#!/bin/bash
DATE=`date +"%d_%b_%Y_%H%M"`
BACKUP_DIR=/mnt/data/backups/saturday/
echo Creating new directory /mnt/data/backups/saturday/fathom_$DATE
sudo mkdir /mnt/data/backups/saturday/fathom_$DATE
sudo chmod 777 /mnt/data/backups/saturday/fathom_$DATE
mysqldump pacific_fathom user_types > /mnt/data/backups/saturday/fathom_$DATE/user_types.sql
echo Dumping users ...
mysqldump pacific_fathom users > /mnt/data/backups/saturday/fathom_$DATE/users.sql
echo Dumping users_roles ...
mysqldump pacific_fathom users_roles > /mnt/data/backups/saturday/fathom_$DATE/users_roles.sql
tar -zvcpf $BACKUP_DIR/PlatformDB-$DATE.tar.gz /mnt/data/backups/saturday/fathom_* | ssh [email protected] 'tar -xzf - -C /mnt/data/backups/saturday'
echo Finished!
the backup works and it will zip the files but it tells me it errors with
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
The output file it creates is PlatformDB-22_Jul_2021_1553.tar.gz
It never moves to the ssh part (checked last on the remote server and there are no logins there) so I am a bit confused as to why I am getting this error. The tar.gz file it creates can be unpacked using the tar -xzf - -C /mnt/data/backups/saturday part of the script. I do have to replace the - with the file name. I do not think that would make a difference in the script though. Also that would not work in an automated environment. Any help would be GREATLY appreciated!!!
|
The command tar -zvcpf ...$DATE.tar.gz /mnt/data/backups/saturday/fathom_* writes the gzipped backup to $DATE.tar.gz and outputs the list of files that are backed up.
This file list is then piped into ssh [email protected] 'tar -xzf - .... Obviously, the file list is not in gzipped format, which causes the error.
Solution: Send the content of $DATE.tar.gz to the ssh command. For example first create the local backup file, then cat $DATE.tar.gz | ssh .... Or, if you don't need the local backup file, create the backup on standard output:
tar -zvcpf - /mnt/data/backups/saturday/fathom_* | ssh ...
You can also try the tee command and write the backup to both standard output and a local file:
tar -zvcpf - /mnt/data/backups/saturday/fathom_* | tee $DATE.tar.gz | ssh ...
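A tiny sketch of what tee does here, on a throwaway stream (the file name is made up):

```shell
#!/bin/sh
set -e
# tee duplicates stdin: one copy goes to the file, one continues down the pipe
printf 'backup bytes' | tee local-copy.bin | wc -c   # prints: 12
cat local-copy.bin                                   # prints: backup bytes
```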
As a side remark not really relevant to this question, I just noticed that tar writes the file list to stdout when the archive is written to a file, and to stderr when the archive is written to stdout. Smart!
| gzip, send to server, ungzip |
1,547,153,754,000 |
I am trying to take a file containing text, named MyFile, and compress it into a file named MyFileComp, within the same directory.
I have tried:
gzip MyFile | touch MyFileComp
gzip MyFile >> MyFileComp
Both commands create MyFileComp, but when I open the file it is empty. When I try to decompress the file, it says unknown suffix. I don't know what that means.
Any help much appreciated. Thanks
|
I think you are not compressing it right.
Use -c to send it to stdout and add the ".gz" at the end. Is that the issue?
gzip -c MyFile > MyFileComp.gz
gzip -d MyFileComp.gz
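Putting the two commands together as a round trip (file names follow the question):

```shell
#!/bin/sh
set -e
printf 'line one\nline two\n' > MyFile

gzip -c MyFile > MyFileComp.gz     # compress into an explicitly named .gz file
gzip -dc MyFileComp.gz > Restored  # decompress to stdout, redirect to a new file

cmp MyFile Restored && echo "roundtrip OK"   # prints: roundtrip OK
```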
| Compress a file into another file, with Gnu/Linux command line |
1,547,153,754,000 |
The program "zip" has a -R feature which allows one to zip all files with a certain name in a directory tree: zip -r v/s zip -R
For example:
zip -R bigfile "bigfile"
Will zip all of the following:
./bigfile
./a/bigfile
./a/b/bigfile
./a/b/c/bigfile
.......
The -R feature doesn't seem to be in gzip or xz though. I've tried it, and I've also checked the man pages.
So how may I emulate this behavior in gzip and xz?
|
Combining find, tar and the compression utilities:
With gzip:
find . -type f -name bigfile | tar cfz bigfile.tgz -T -
or with xz:
find . -type f -name bigfile | tar cfJ bigfile.txz -T -
find searches recursively for all files named bigfile under the current/working directory and the resulting pathnames are supplied to tar that creates a tarball and compresses it.
These commands are suited for the example supplied in the question. Different patterns supplied to zip -R will require corresponding arguments supplied to find.
Also keep in mind that this won't work for all possible filenames; you should consider the --null option and feed tar from find -print0.
Also tar's "-T" option is not available on every systems (for instance in HP-UX).
EDIT1
Unlike zip, rar or 7-zip, for example, gzip and xz are not capable of compressing multiple files into one.
Quoting the gzip manpage:
If you wish to create a single archive file with multiple members so that members can later be extracted independently, use an archiver such as tar or zip. GNU tar supports the -z option to invoke gzip transparently. gzip is designed as a complement to tar, not as a replacement.
See How to gzip multiple files into one gz file? and How do I compress multiple files into a .xz archive?.
EDIT2
If the goal of the OP is to make a gzip file for each file it finds that satisfies the search criteria, gzip has to be run on the files themselves (piping find's output into gzip would compress the list of file names, not the files):
find . -type f -name bigfile -exec gzip -k {} +
For xz files:
find . -type f -name bigfile -exec xz -k {} +
It will create a compressed file in the same directory for each file that satisfies the search criteria, leaving the original file "untouched" (the -k option keeps it).
EDIT3
As suggested by @Kusalananda, if in bash you first do:
shopt -s globstar
and then issue the command:
tar -c -zf bigfile.tgz ./**/bigfile
a single archive file will be created with the multiple files found in subdirectories that satisfies the search criteria.
If the goal is to create one compressed file for each file found in subdirectories that satisfies the search criteria, after issuing the shopt command, you can just issue:
gzip ./**/bigfile
| How may I emulate the -R feature of "zip", but in gzip and xz? |
1,547,153,754,000 |
I'm trying to run find . -name "binaries.tgz.*.gz" -exec gzip -d -k < {} \; (ultimately I'm trying to run find . -name "binaries.tgz.*.gz" -type f -exec gzip -d -k < {} \; | tar tzf - but I'm trying to figure out why the command before the pipe isn't working first), but I get the following error:
-bash: {}: No such file or directory:
[user@host:/data/dionaea]$ find . -name "binaries.tgz.*.gz" -type f -exec gzip -d -k < {} \;
-bash: {}: No such file or directory
[user@host:/data/dionaea]$ ls | grep binaries
binaries
binaries.tgz
binaries.tgz.10.gz
binaries.tgz.11.gz
binaries.tgz.12.gz
binaries.tgz.13.gz
binaries.tgz.14.gz
binaries.tgz.15.gz
binaries.tgz.16.gz
binaries.tgz.17
binaries.tgz.17.gz
binaries.tgz.18.gz
binaries.tgz.19.gz
binaries.tgz.1.gz
binaries.tgz.20.gz
binaries.tgz.21.gz
binaries.tgz.22.gz
binaries.tgz.23.gz
binaries.tgz.24.gz
binaries.tgz.25.gz
binaries.tgz.26.gz
binaries.tgz.27.gz
binaries.tgz.28.gz
binaries.tgz.29.gz
binaries.tgz.2.gz
binaries.tgz.30.gz
binaries.tgz.3.gz
binaries.tgz.4.gz
binaries.tgz.5.gz
binaries.tgz.6.gz
binaries.tgz.7.gz
binaries.tgz.8.gz
binaries.tgz.9.gz
What am I doing wrong?
|
In your command
find . -name "binaries.tgz.*.gz" -exec gzip -d -k < {} \;
the < {} is interpreted by the shell before running find.
Use
find . -name "binaries.tgz.*.gz" -exec gzip -d -k {} \;
to extract all files and keep the original ones.
You can try
find . -name "binaries.tgz.*.gz" -type f -exec gzip -d -c {} \; | tar tzf -
to extract to stdout, but you cannot be sure that the files will get processed by find in the correct order.
| "-bash: {}: No such file or directory" using find exec [duplicate] |
1,547,153,754,000 |
I use
sudo tar xvzf forwarder.tar -C /opt/
on Linux and this works perfectly then I proceed with a
. /splunk start
to install a forwarder file.
While in AIX, its forwarder is in .gz format and I tried using
gzip -d filename.gz
And it just decompresses the gz file. Tried also using
sudo tar -xvf -C "/directory/path/" "/home/forwarder.gz"
and it's not working either. I also tried .tar in the command above but to no avail, got no result. How do I install this on AIX?
|
Though we cannot be sure of what your file contains,
as @JeffSchaller suggested, try piping the output of gzip into tar as follows:
gzip -d <filename.gz |
sudo tar -xvf - -C "/directory/path/"
If this doesn't provide your wanted file, then use the file command on the data to see what type it is. E.g.: file filename.gz, and if it is a compressed file then gzip -d <filename.gz | file -.
| filename.gz installation on AIX |
1,547,153,754,000 |
I have a gzipped file (around 3Kb), and I want its size to be exactly 4000.
To achieve this goal, is it safe to add trailing padding with zeroes? (By safe I mean that the content of the gzipped file can be gunzipped without errors.)
dd if=/dev/zero of=/myfile bs=1 count=4000
dd if=/gzipfile.gz of=/myfile
If this is not safe, are there alternatives?
|
From the man page:
CAVEATS
When writing compressed data to a tape, it is generally
necessary to pad the output with zeroes up to a block
boundary. When the data is read and the whole block is
passed to gunzip for decompression, gunzip detects that
there is extra trailing garbage after the compressed data
and emits a warning by default. You have to use the
--quiet option to suppress the warning.
So it would seem you're safe.
Note though that your code doesn't work, as you'd need to pass
conv=notrunc on the second dd invocation.
Alternatively, you can do:
dd bs=4000 seek=1 count=0 of=file.gz
or
truncate -s 4000 file.gz
To make it 4000 bytes large (without actually writing zeros, just make it sparse).
| How to safely enlarge a gzipped file? |
1,547,153,754,000 |
I'm trying to dump a huge database and compress the dump in order to not have to wait hours till it's done.
I dump the database the following way:
pg_dump -Fc -U -v | gzip > db$(date +%d-%m-%y_%H-%M).tar.gz
This leaves me with a compressed tar file.
I know want to unzip it in order to have a .tar file only:
tar -xvf xxx.tar.gz
This leaves me with an error message saying This does not look like a tar archive file
My goal is to then import it via psql.
I do not see what I am doing wrong – according to the Postgres documentation on dumps, I can use -Fc to dump in any wanted format? Thank you
|
This leaves me with a compressed tar file
No. You're using -Fc, which gives you a "custom" file format specific to pg_dump and pg_restore. That's not a tar, so you're not compressing a tar file with your gzip call.
Furthermore, the pg_dump documentation points out:
Output a custom-format archive suitable for input into pg_restore. Together with the directory output format, this is the most flexible output format in that it allows manual selection and reordering of archived items during restore. This format is also compressed by default.
Your gzip tries to compress something that's already compressed. That's not gonna do much, aside from wasting time.
As a matter of fact, under --compress=0..9, the same documentation tells us:
Specify the compression level to use. Zero means no compression. For the custom and directory archive formats, this specifies compression of individual table-data segments, and the default is to compress at a moderate level. For plain text output, setting a nonzero compression level causes the entire output file to be compressed, as though it had been fed through gzip; but the default is not to compress. The tar archive format currently does not support compression at all.
So, it uses gzip already! Can't reduce the size of something that's already gzip'ed using gzip.
What you could do instead is using
pg_dump -Fc -Z0 -U -v | zstd -5 > db$(date +%d-%m-%y_%H-%M).custom.zst
# ^ ^ ^ ^
# | | | \----- zstd compression level 5:
# | | | better than gzip --best,
# | | | but much, much faster
# | | \-------- use the zstd compressor
# | \-------------------- don't compress yourself
# \--------------------- custom format
Because, honestly, gzip is very obsolete. It's slow, doesn't scale well, and the compression ratios are terrible. There's many nicer alternatives, but zstd allows for a wide range of speed/compression ratio tradeoffs, and is very actively maintained and available for all platforms.
Warning: Slight ranting below!
Note that you can, on the compressing side, use a higher compression setting than -5; but the higher you go, the slower compression gets. It really depends on your time vs space tradeoff whether you want to try -18; I often go with -11 for zstd, which for typical data is about two thirds as fast as gzip --best, but tends to produce files that are 10% smaller. zstd's range of compression vs speed tradeoff (-1 to -18, or up to -22 if you really have too much free CPU time and care about 0.1% better compression ratio) is much more finely grained than gzip's, and more useful on modern machines, where zlib (which underlies gzip) is limited to a 32 kB sized window. Because, who got more than 64 kB of RAM? Everyone. In 2022, even my baking oven has more than 64 kB of RAM. So, zstd doesn't try to use incredibly small dictionary-building windows. That's one of the simpler reasons why its compression can work better than that of zlib/gzip.
| Unzip compressed dump and import via psql |
1,547,153,754,000 |
I am following a guide to learn terminal commands and have encountered the use of hyphens to refer to stdin in the following command used to find all files named 'file-A' within the directory playground. I understand its usage to refer to the files found by the find command, which are of course piped to tar and set to be read as a file using the --files-from= option. I do not understand its usage in defining the intermediary name of the archive before being renamed by the gzip command.
find ./playground -name 'file-A' | tar -cf - --files-from=- | gzip playground.tgz
Would the value of stdin not be a list of paths defined in the stdin file? if so how can this be an acceptable name for the archive?
Thanks
|
I can't find it in the tar manual. However - is often used by tools as a pseudo-filename referring to stdin or stdout. tar is using it for both: -f - says output to stdout, and --files-from=- says get a list of file names from stdin. The in and out are implied; these options expect an output and an input respectively.
Also: gzip is not renaming. And there are no temporary files, data is passed via pipes.
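A minimal sketch (paths under /tmp/dash-demo are made up) showing both uses of - at once:

```shell
# tar -cf - writes the archive to stdout; --files-from=- reads the name list from stdin
rm -rf /tmp/dash-demo && mkdir -p /tmp/dash-demo && cd /tmp/dash-demo
printf 'content\n' > file-A
printf 'file-A\n' | tar -cf - --files-from=- | gzip > playground.tgz
tar tzf playground.tgz
```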
| What would the value of hyphen actually look like in this instance used to refer to stdin for this archive command? |
1,553,510,077,000 |
I run into an error while extracting a tar file: the directories are created with mode 666 instead of 777, so tar cannot extract files inside them.
Here is my command:
$umask 000 && tar -xvzf compress.tgz
tar: dist/assets: Cannot mkdir: Permission denied
tar: dist/assets/favicon.ico: Cannot open: Permission denied
$ls -ll
drw-rw-rw- 2 user grp 4096 Mar 14 16:43 assets
I used this module on local to compress the file:
https://www.npmjs.com/package/tar
When I create a directory with mkdir it gives 777 mode, what am I missing?
As requested:
-bash-4.2$ tar tzvf compress.tgz
drw-rw-rw- 0/0 0 2018-03-15 12:17 dist/
-rw-rw-rw- 0/0 13117 2018-03-15 12:17 dist/3rdpartylicenses.txt
drw-rw-rw- 0/0 0 2018-03-15 12:17 dist/assets/
I use --strip 1 to extract.
|
As you can see from the output of tar tv the permissions in the archive itself are broken. If you have any control over the tool that created this archive I would strongly recommend that you fix it, or report a bug.
I assume you still need to extract the files from the broken archive. Try this:
tar xzvf compress.tgz --delay-directory-restore
find dist -type d -exec chmod a+x {} \;
(We can't use a trailing + in this instance because the chmod must be applied one directory at a time so that find can descend into the fixed subdirectories. The semicolon is prefixed with a backslash so that it's not treated by the shell as a special character, but rather it's passed to the find ... -exec as a literal.)
| tar command create directory without 777 permission |
1,553,510,077,000 |
I am having a two linux server set up.
ServerA has /apps/data.
ServerB has mounted ServerA's path /apps/data at /data over NFS.
How is the load of various operations handled? Meaning:
Q:
When initiating gzip, for example (cp could be a second example), on ServerB, is the I/O handled by ServerA or ServerB?
How do the disks perform? Does ServerB fetch the file to its own disk to perform the gzip, then place it back on the NFS share and sync it over the network? (I/O increase on both servers + network traffic)
|
You have a directory shared over NFS from ServerA and mounted by ServerB.
If you perform a file operation in that directory on ServerB, there will be no disk I/O happening on ServerB, but there will be network I/O between the servers, and ServerA will eventually perform the actual disk operations (by instructions from the NFS daemon).
The file you are accessing will not be transferred between the servers as if synchronized with rsync or scp, but chunks of the file will be transferred over NFS and handed directly to the process that reads the data, as it reads it, or to the server for writing to disk as needed. This is happening using the NFS protocol as described by RFC 1094 (NFSv2) or RFC 1813 (NFSv3).
Again, there will be no disk operations on ServerB, unless it needs swap or if whatever it is you're doing allocates space in a non-NFS-mounted directory (e.g. /tmp).
In fact, ServerB might very well exist without any physical disk connected at all. This is called a "diskless system" and used to be popular in the computer labs where I first encountered Unix in the early 90's (on Sun SPARCstation IPCs that had a local /tmp but everything else mounted via NFS).
Working over NFS will be slower than working against a local disk due to the network I/O, but for day-to-day command line work it hardly matters much unless you regularly handle large amounts of data on file.
| Disk performance between servers in NFS directory |
1,553,510,077,000 |
I frequently use the commands like
find . ! -iname "*test*" -o -iname "*foo*" | xargs zgrep -ie first.*last -e last.*first
I use zgrep because it can grep through .gz files, and if the files aren't gzipped it simply uses grep. However I frequently get
gzip: copy.txt.gz: No such file or directory
logs that clutter the output of my searches. Is there any way to mute these gzip logs?
|
You can redirect the command standard error output to the null device.
find . ! -iname "*test*" -o -iname "*foo*" | xargs zgrep -ie first.*last -e last.*first 2>/dev/null
| Mute gzip errors/warnings when using zgrep |
1,553,510,077,000 |
I try to gzip a file abc.log which has a size of 111 bytes, but after gzip, the size of the file increased to 125 bytes. Why is that? Is it that when I perform gzip, it creates a header and trailer of a certain size?
Command used:
gzip -5 abc.log
|
Not just gzip, but attempting to compress a file which is already as small as possible can increase the size (because each method for compression has some overhead in the form of header, tables, etc). This is also referred to as negative compression.
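A quick sketch illustrating the overhead on a tiny input (the /tmp file name is made up; exact sizes depend on the gzip version, but the compressed file will always exceed a 6-byte input):

```shell
# gzip's fixed header/trailer outweighs any savings on a tiny file
printf 'hello\n' > /tmp/tiny.txt
gzip -c -5 /tmp/tiny.txt > /tmp/tiny.txt.gz
wc -c /tmp/tiny.txt /tmp/tiny.txt.gz
```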
Further reading
What's the most that GZIP or DEFLATE can increase a file size?
Why GZip compression increases the file size for some extension?
Optimizing encoding and transfer size of text-based assets
| Linux Gzip increasing size |
1,553,510,077,000 |
I have a .tar.gz as input and want to extract the first 128 MiB of it and output as a .tar.gz in a single command. I tried:
sudo tar xzOf input.tar.gz | sudo dd of=output bs=1M count=128 iflag=fullblock | sudo tar -cfz final.tar.gz -T -
which is obviously not working. How can I achieve this.
|
Instead of trying to extract the archive’s contents (which can’t work here — there’s no way to track the individual files’ metadata), decompress it, truncate it and recompress it. If you have a version of head capable of this:
gzip -dc input.tar.gz | head -c128M | gzip -c > final.tar.gz
or you can use dd as in your command:
gzip -dc input.tar.gz | dd bs=1M count=128 iflag=fullblock | gzip -c > final.tar.gz
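A scaled-down sketch of the same pipeline (1 KiB instead of 128 MiB; all paths under /tmp are made up). Note the cut-off file at the end of the truncated stream loses its trailing data, as expected:

```shell
# Decompress, truncate the tar stream, recompress
rm -rf /tmp/trunc-demo && mkdir -p /tmp/trunc-demo && cd /tmp/trunc-demo
dd if=/dev/urandom of=payload bs=1024 count=8 2>/dev/null
tar czf input.tar.gz payload
gzip -dc input.tar.gz | head -c1024 | gzip -c > final.tar.gz
gzip -dc final.tar.gz | wc -c    # prints 1024
```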
| Extract first n bytes from .tar.gz and output as a .tar.gz in a single command |
1,553,510,077,000 |
On Linux Mint 20.2 Cinnamon I would like to create a disk image of my secondary disk drive (SATA) containing Windows 10, not that it matters now, directly gzip'ed using the Parallel gzip = pigz onto NTFS formatted external HDD (compressed on-the-fly).
My problem is inside the resulting compressed file, there is somehow twisted (wrong) size of the contents, which I would like you to have a look at:
1TB drive uncompressed disk shows only 3.8GB whereas its compressed size is 193 GB.
$ gzip --list sata-disk--windows10--2021-Sep-24.img.gz
compressed uncompressed ratio uncompressed_name
206222131640 3772473344 -5366.5% sata-disk--windows10--2021-Sep-24.img
-rwxrwxrwx 1 vlastimil vlastimil 193G 2021-Sep-24 sata-disk--windows10--2021-Sep-24.img.gz
Notes to the below shell snippet I just ran
Serial number censored, of course (ABCDEFGHIJKLMNO)
I tried to force the size with --size of pv command
The exact byte size of the whole disk comes from smartctl -i /dev/sdX
The shell snippet I just ran follows
dev=/dev/disk/by-id/ata-Samsung_SSD_870_QVO_1TB_ABCDEFGHIJKLMNO; \
file=/media/vlastimil/4TB_Seagate_NTFS/Backups/sata-disk--windows10--"$(date +%Y-%b-%d)".img.gz; \
pv --size 1000204886016 < "$dev" | pigz -9 > "$file"
I am quite sure the problem is in how I used the pipe or pv for that matter, but I fail to prove it. Test scenario with a regular file (~2GB) works just fine and as expected. Can this be an error in gzip maybe...?
What am I doing wrong here, please? Thank you in advance.
Perhaps the last thing to cover is versions of pv and pigz:
I am using a packaged version of pv: 1.6.6-1
I am using a compiled version of pigz: 2.6
|
I may just have found an answer to this oddity.
As the gzip manual page says:
Bugs: The gzip format represents the input size modulo 2^32, so the --list option reports incorrect uncompressed sizes and compression ratios for uncompressed files 4 GB and larger.
It further states:
To work around this problem, you can use the following command to discover a large uncompressed file's true size: zcat file.gz | wc -c.
On a personal note: that command, which could find out the real size, is probably useless on very large files like 1 TB, as I cannot really imagine where it would uncompress such a file in the first place. Secondly, it would take ages. And even if space were not an issue, then on SSDs there is the wear problem, etc.
It is clear gzip is actually causing the problem. And it will not go away. In effect, it will cause the decompression progress to be impossible to watch. (Without feeding pv the size, of course.)
So is there any viable solution?
Sadly, I have found nothing so far. I just tried the Parallel bzip2 (from Ubuntu focal universe directly), and it also reports an invalid file size (202 GB this time). I need it done relatively fast, so these were my candidates. If I do not find any other fast alternative, I will stick with gzip, for it is the fastest.
Example with start / finish in color :)
# UPDATED on 2021-sep-25 03:00 AM
# SATA disk backup using Parallel `gzip` = `pigz` (compiled version 2.6)
tput bold; tput setaf 2; printf '%s' 'Start : '; date; printf '\n'; tput sgr0; \
gz_date=$(date +%Y-%b-%d | tr '[:upper:]' '[:lower:]'); \
gz_disk=/dev/disk/by-id/ata-Samsung_SSD_870_QVO_1TB_ABCDEFGHIJKLMNO; \
gz_file=/media/vlastimil/4TB_Seagate_Ext4/Backups/sata-disk--windows10--"$gz_date".img.gz; \
pv --size 1000204886016 < "$gz_disk" | pigz -9 > "$gz_file"; \
printf '\n'; tput bold; tput setaf 2; printf '%s' 'Finish: '; date; tput sgr0;
A list of my compressor candidates vs their speed can be found here. But if it ever disappears, here is a screenshot (click to enlarge):
| 1TB drive compressed shows only 3.8GB, what did I do wrong? |
1,553,510,077,000 |
I want to trim the path of the gunzipped tarball so that some "spurious" leading directories are excluded. Let me explain with an example.
I have the following directory structure, as outputted by the tree command:
tree /tmp/gzip-expt
/tmp/gzip-expt
├── gunzip-dir
├── gzip-dir
└── repo
└── src-tree
├── l1file.txt
└── sub-dir
└── l2file.txt
5 directories, 2 files
I want to gzip up src-tree in gzip-dir so this is what I do:
cd /tmp/gzip-expt/gzip-dir
tar -czvf file.tar.gz /tmp/gzip-expt/repo/src-tree
Subsequently I gunzip file.tar.gz in gunzip-dir so this is what I do:
cd /tmp/gzip-expt/gunzip-dir
tar -xzvf /tmp/gzip-expt/gzip-dir/file.tar.gz
tree /tmp/gzip-expt/gunzip-dir shows the following output:
/tmp/gzip-expt/gunzip-dir
└── tmp
└── gzip-expt
└── repo
└── src-tree
├── l1file.txt
└── sub-dir
└── l2file.txt
However, I would like tree /tmp/gzip-expt/gunzip-dir to show the following output:
/tmp/gzip-expt/gunzip-dir
└── src-tree
├── l1file.txt
└── sub-dir
└── l2file.txt
In other words, I don't want to see the "spurious" tmp/gzip-expt/repo part of the path.
|
They're not spurious, it just stores exactly what it was told to store.
In particular, given the path /tmp/gzip-expt/repo/src-tree, it can't know which parts of the path should be kept, if the files should be stored as /tmp/gzip-expt/repo/src-tree/l1file.txt, or src-tree/l1file.txt or l1file.txt etc. It makes a difference when extracting the tarball, if the archive has the leading directory, it's created when extracting. If not, it's not.
Give tar a relative path, and have it run in the correct directory for the relative paths to work correctly:
cd /tmp/gzip-expt/repo
tar -czvf /tmp/gzip-expt/gzip-dir/file.tar.gz ./src-tree
or
cd /tmp/gzip-expt/gzip-dir
tar -C /tmp/gzip-expt/repo -czvf file.tar.gz ./src-tree
The GNU man page describes -C as:
-C, --directory=DIR
Change to DIR before performing any operations. This option is order-sensitive, i.e. it affects all options that follow.
but as far as I can see, the file given by -f is still used from the directory tar was started in, even if -C is given first.
If you want to fix it on extraction, at least GNU tar has --strip-components=N which tells it to drop leading parts of the filenames. E.g. with --strip-components=2, a filename like /tmp/gzip-expt/repo/src-tree/... would be extracted as repo/src-tree/....
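A small sketch of both fixes (the /tmp/strip-demo tree mirrors the question's layout; --strip-components is GNU tar):

```shell
# Two ways to avoid the leading directories: relative paths at create time,
# or --strip-components at extract time
rm -rf /tmp/strip-demo && mkdir -p /tmp/strip-demo/repo/src-tree && cd /tmp/strip-demo
printf 'x\n' > repo/src-tree/l1file.txt
tar -C repo -czf /tmp/strip-demo/rel.tar.gz ./src-tree   # stores ./src-tree/...
mkdir out1 && tar -xzf rel.tar.gz -C out1
tar -czf abs.tar.gz repo/src-tree                        # stores repo/src-tree/...
mkdir out2 && tar --strip-components=1 -xzf abs.tar.gz -C out2
ls out1/src-tree out2/src-tree
```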
| Getting rid of spurious directories with tar -xzvf, while gunzipping |
1,553,510,077,000 |
I have extracted a file 'linux' from ubutu installation .iso file.
The command file linux gives me this output.
linux: gzip compressed data, was
"vmlinuz-5.4.0-42-generic.efi.signed", last modified: Sat Jul 11
08:53:21 2020, max compression, from Unix, original size modulo 2^32
30118272
To uncompress it, I tried gzip -d linux but it gives me gzip: linux: unknown suffix -- ignored as output. How can I extract or uncompress this file?
|
gunzip (or gzip -d) tries to infer the output filename by removing the "dot suffix" from the input file name. Since the filename linux doesn't have a suffix, it doesn't know what to name the output.
If your version of gzip supports the -N option you can try using that to restore the extracted file's original name:
-N --name
When compressing, always save the original file name and time
stamp; this is the default. When decompressing, restore the
original file name and time stamp if present. This option is
useful on systems which have a limit on file name length or when
the time stamp has been lost after a file transfer.
Otherwise probably the simplest thing is to rename the input file (to linux.gz for example). Alternatively, decompress to standard output and redirect to a filename of your choosing: gzip -dc linux > vmlinuz-5.4.0-42-generic.efi.signed
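A small sketch of -N with GNU gzip (the /tmp paths and file names are made up). Note the input here still carries a recognized .gz suffix; for a suffix-less file like linux, renaming it first remains the simplest route:

```shell
# -N stores the original name in the header and restores it on decompression,
# so a rename of the .gz file doesn't matter
rm -f /tmp/original-name.txt /tmp/original-name.txt.gz /tmp/renamed.gz
printf 'data\n' > /tmp/original-name.txt
gzip -N /tmp/original-name.txt
mv /tmp/original-name.txt.gz /tmp/renamed.gz
gzip -dN /tmp/renamed.gz        # recreates /tmp/original-name.txt
ls /tmp/original-name.txt
```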
| can't uncompress a gzip compressed data |
1,553,510,077,000 |
I had 100 images from a Raspberry Pi project and in order to transfer them to another computer, I used the "Compress" interaction after selecting all of the images and using the right click context menu. The resulting file (.gz) has the correct size, but there is only one image inside (and of correct size), which even after extraction with multiple tools (GUI, 7z, unzip etc) is shown to be a single image of ~300 Mb (which I can open and display).
What is happening here?
|
The problem arises from the misuse of the file format, which is not intended for multiple files, as roaima clarified. The gzip manpage states:
If you wish to create a single archive file with multiple members so
that members can later be extracted independently, use an archiver
such as tar or zip. GNU tar supports the -z option to invoke gzip
transparently. gzip is designed as a complement to tar, not as a
replacement
| Compressing multiple files to .gz misbehaving |
1,553,510,077,000 |
I created the compressed tar file using the below command.
find /app/jboss -not -name "*.err" -not -name "*.log" | cpio -o | gzip >/app/patchbkp/test/REDHAT_jboss-eap-7.2_18-Aug-2020.tar.gz
I got the compressed file here
[user1@myhost test]$ ls -ltr /app/patchbkp/test/REDHAT_jboss-eap-7.2_18-Aug-2020.tar.gz
-rwxrwxr-x 1 user1 mygrp 363997224 Aug 18 16:08 /app/patchbkp/test/REDHAT_jboss-eap-7.2_18-Aug-2020.tar.gz
I want to extract the /app/jboss directory under /app/patchbkp/test so I get /app/patchbkp/test/app/jboss/.......; thus I try the below command.
cd /app/patchbkp/test/
zcat /app/patchbkp/test/REDHAT_jboss-eap-7.2_18-Aug-2020.tar.gz | cpio -i
But this does not generate any files under /app/patchbkp/test
I get a permission denied error when I try gunzip /app/patchbkp/test/REDHAT_jboss-eap-7.2_18-Aug-2020.tar.gz
Can you please guide me as to how can I extract the tar compressed /app/jboss under /app/patchbkp/test directory?
|
The first command creates a compressed cpio-format archive with absolute filenames. This means that when you extract the files, they will be placed in those absolute places
Note that cpio -o writes a cpio-format archive, not a tar-format one. You should use cpio -o -H tar for a tar-format file.
Your extract command will work, but only by writing the files to the absolute locations on the filesystem. You can see what would happen by first listing the file with the -t flag
zcat /app/patchbkp/test/REDHAT_jboss-eap-7.2_18-Aug-2020.tar.gz | cpio -it
I would instead recommend these variants of your commands
( cd / && find app/jboss -not -name "*.err" -not -name "*.log" | cpio -o -H tar ) | gzip >/app/patchbkp/test/REDHAT_jboss-eap-7.2_18-Aug-2020.tar.gz
or even use GNU tar directly if you have it
tar -C / -czvf /app/patchbkp/test/REDHAT_jboss-eap-7.2_18-Aug-2020.tar.gz --exclude '*.log' --exclude '*.err' app/jboss
| How to extract a compressed tar using cpio to a particular directory location |
1,553,510,077,000 |
I am using RHEL. How can I find files in the directory /path/to/directory/ which contain the string SomeText if those files are .gz archives?
|
You can use zgrep just like you would use grep:
zgrep SomeText /path/to/directory/*
Or, to make it recursive (if you have Zutils), including hidden files:
zgrep -R SomeText /path/to/directory/
Or, to make it recursive (without Zutils):
find /path/to/directory/ -type f -exec zgrep SomeText {} +
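A quick sketch (the /tmp/zgrep-demo path and log names are made up) showing that zgrep handles compressed and plain files alike:

```shell
# zgrep searches gzipped and uncompressed files with the same invocation
rm -rf /tmp/zgrep-demo && mkdir -p /tmp/zgrep-demo
printf 'SomeText here\n' > /tmp/zgrep-demo/a.log && gzip /tmp/zgrep-demo/a.log
printf 'nothing here\n'  > /tmp/zgrep-demo/b.log
zgrep -l SomeText /tmp/zgrep-demo/*    # lists only the matching file
```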
| How to find files in the directory “/path/to/directory/” which contains information "SomeText" if they are in archives .gz? |
1,553,510,077,000 |
I want to back up each of the directories in /home, in separate .tar.gz files, to the /backup directory. Here's an example. I know how to compress them, but I'm not sure how to send them to another directory.
/home/ab123456, /home/ertoto, /home/mange
/backup/ab123456.tar.gz, /backup/ertoto.tar.gz, /backup/mange.tar.gz
|
You can loop over the contents of /home
cd /home
for file in */; do tar czf /backup/"${file%/}".tar.gz "$file"; done
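A scaled-down sketch using /tmp stand-ins for /home and /backup (directory names taken from the question); ${file%/} trims the trailing slash the */ glob leaves on each name, so the tarballs get clean file names:

```shell
rm -rf /tmp/home-demo /tmp/backup-demo
mkdir -p /tmp/home-demo/ab123456 /tmp/home-demo/ertoto /tmp/backup-demo
printf 'x\n' > /tmp/home-demo/ab123456/f
cd /tmp/home-demo
for file in */; do tar czf /tmp/backup-demo/"${file%/}".tar.gz "$file"; done
ls /tmp/backup-demo
```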
| gzip multiples directory into archives in another directory |
1,553,510,077,000 |
Based on this page: http://code.google.com/speed/page-speed/docs/payload.html#GzipCompression
I need to enable compression on my website.
|
You should start googling words 'apache compression'. First link in SERP will lead you to http://httpd.apache.org/docs/2.0/mod/mod_deflate.html
| Configure webserver for compression |
1,553,510,077,000 |
I created some pigz (parallel gzip) - home page - compressed archives of my SSD disk drives. (compiled version 2.8)
I called one of them 4TB-SATA-disk--Windows10--2024-Jan-21.img.gz which says the size of the drive and the OS installed on it. Alternatively, the size is of course smaller in TiB which may be comfortably shown by fdisk and tools like it (Disk /dev/sda: 3.64 TiB).
I knew from the past, that listing the compressed file cannot show me the real size of contents. It will show 2.2GB or similar nonsense, even in Archive Manager for GNOME. It likely has something to do with gzip structure limitations.
However I had some doubts about the real size of the contents, therefore my question states how may I verify it?
|
When I thought about it, I realized that without uncompressing the archive, this likely will not be possible, so I came up with the following:
$ cat 4TB-SATA-disk--Windows10--2024-Jan-21.img.gz | unpigz -p8 | pv >/dev/null
3.64TiB 1:56:53 [ 543MiB/s] [ 543MiB/s] [ <=> ]
While I still hope there is some other way, I post my solution for the record.
| How to verify size of pigz (parallel gzip) archive contents? [duplicate] |
1,553,510,077,000 |
I have an application that has many thousands of files totaling over 10TB.
I'd need to backup this data somewhere (probably to AWS S3).
I'd like to:
compress data being backed up
save the backup as a single file
For example as a gzipped tarfile.
Because of the size, I cannot create the gzipped tarfile locally because it'd be too large.
How can I:
Stream all of these folders and files onto AWS S3 as a single compressed file?
Stream the compressed file from S3 back onto my disk to the original filesystem layout?
|
This is a basic piping and ssh use case.
$ tar zcf - -C /path/to/your/files . | ssh S3_hostname 'cat > yourfile.tar.gz'
To decompress:
$ ssh S3_hostname 'cat yourfile.tar.gz' | tar zxf - -C /path/to/extract/to
The key here is telling tar that it should write to or read from stdout/stdin instead of a file on the local filesystem. In the case of tar creating the archive, the stdout from tar is fielded by ssh which pipes it to the remote invocation of cat running on the S3 host, where the output gets written to file yourfile.tar.gz. In the decompression scenario, ssh is again used to invoke cat on the remote host to read the file, and that stream becomes the stdin stream for a local invocation of tar which extracts the archive to the path specified in the -C argument.
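A local stand-in for the same idea, with a plain pipe in place of ssh (paths under /tmp/pipe-demo are made up):

```shell
# tar writes the archive to stdout (-f -); the receiving tar reads it from stdin
rm -rf /tmp/pipe-demo && mkdir -p /tmp/pipe-demo/src /tmp/pipe-demo/dst
printf 'x\n' > /tmp/pipe-demo/src/f.txt
tar zcf - -C /tmp/pipe-demo/src . | tar zxf - -C /tmp/pipe-demo/dst
cat /tmp/pipe-demo/dst/f.txt
```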
| How to backup many large files to single compressed file on S3 |
1,553,510,077,000 |
I have a script with a while loop that will print a text. I want to save it into files with custom name.
Script:
#!/bin/bash
while true
do
echo "Press [CTRL+C] to stop.."
done
I can run split:
./loopscript.sh | split -dl 10000 --additional-suffix=.txt
Output:
x001.txt
x002.txt
x003.txt
x004.txt
x005.txt
But I want to save it like below:
myoutput.001.gz
myoutput.002.gz
myoutput.003.gz
myoutput.004.gz
myoutput.005.gz
|
For the custom name (the prefix) you can just add it at the end as argument. For filtering all output files through gzip you can use the --filter=COMMAND option. Also -a3 is optional if you need to define the suffix length to 3 characters (001, 002 etc) . Also note the - (to read stdin) before the output prefix argument.
./loopscript.sh | split -a3 -dl 10000 --filter='gzip > $FILE.gz' - myoutput.
will produce gzipped files (of 10K lines if unzipped) named like:
myoutput.000.gz
myoutput.001.gz
myoutput.002.gz
From man split the syntax says (optional) output prefix goes to the end:
SYNOPSIS
split [OPTION]... [FILE [PREFIX]]
and --filter accepts $FILE as the filename out of the split command:
--filter=COMMAND
write to shell COMMAND; file name is $FILE
| Split file from a streaming output with custom name and gzip |
1,553,510,077,000 |
I have 2 scripts.
I'm calling my script2 as sh scriptpath/script2.sh & inside script1
which compresses files using a combination of find, xargs and gzip commands on 16 files at a time. It is basically a file watcher script integrated to run the process below after checking if a file is present. (Not cron)
Reference: https://it.toolbox.com/question/file-watcher-script-070510
find ${Filepath}/ -maxdepth 1 -type f -name "${Pattern}" -print0 | xargs -0 -t -n 1 -P 16 gzip > /dev/null
After calling my script 2, it's hanging at the command above.
Script1 session is getting closed and script2's shell is opening up with above command status. I need the gzip command of the second script to be run in the background and not the foreground.
Script1 - Generates few files. Exports variables to be used in script2
And then calls script2 as sh script2 needed parameters & (the ampersand pushes script2 into the background), and script1 completes. But after script2 finds one touch file, it begins its execution, and the prompt of script2, where gzip is executing, comes to the foreground.
Script2
Gunzips files created before calling script 2
fileflag=0
timer1=0
check_interval=300 # check every 5 minutes
(( check_interval_minutes=${check_interval}/60 ))
while [ ${timer1} -lt 180 ]
do
if [ -f /path/to/my/file ]
then
find ${Filepath}/ -maxdepth 1 -type f -name "${Pattern}" -print0 | xargs -0 -t -n 1 -P 16 gzip > /dev/null
else
sleep ${check_interval}
fi
(( timer1=${timer1}+${check_interval_minutes} ))
done
|
xargs -t writes to stderr. Your > /dev/null doesn't affect stderr. So you are writing to the terminal from a background process which usually is a bad idea.
| Another called script not going into background even given & |
1,553,510,077,000 |
I have a directory with thousands of files .gz and I would like to uncompress and save the uncompressed files in a specific directory.
I have tried but I can't get it to work (I'm a beginner in this field).
Thanks
|
Try something like:
mkdir destination
cd destination
for g in ../origin/*.gz; do      # Each *.gz file in origin...
    out=${g##../origin/}         # strip the leading path
    gzcat "$g" > "${out%.gz}"    # ... gets uncompressed here, minus the .gz suffix
done
With thousands of files, the glob (../origin/*.gz) might choke... and the destination directory might also get very slow.
| unix uncompress multiple gz and save in particular directory |
1,333,655,309,000 |
In Linux I can create a SHA1 password hash using sha1pass mypassword. Is there a similar command line tool which lets me create sha512 hashes? Same question for Bcrypt and PBKDF2.
|
Yes, you're looking for mkpasswd, which (at least on Debian) is part of the whois package. Don't ask why...
anthony@Zia:~$ mkpasswd -m help
Available methods:
des standard 56 bit DES-based crypt(3)
md5 MD5
sha-256 SHA-256
sha-512 SHA-512
Unfortunately, my version at least doesn't do bcrypt. If your C library does, it should (and the manpage gives a -R option to set the strength). -R also works on sha-512, but I'm not sure if its PBKDF-2 or not.
If you need to generate bcrypt passwords, you can do it fairly simply with the Crypt::Eksblowfish::Bcrypt Perl module.
| How to create SHA512 password hashes on command line |
1,333,655,309,000 |
I tried to use sha256sum in High Sierra; I attempted to install it with MacPorts, as:
sudo port install sha256sum
It did not work.
What to do?
|
After investigating a little, I found a ticket for an unrelated piece of software on GitHub, sha256sum command is missing in MacOSX, with several solutions:
installing coreutils
sudo port install coreutils
It installs sha256sum at /opt/local/libexec/gnubin/sha256sum
As another possible solution, using openssl:
function sha256sum() { openssl sha256 "$@" | awk '{print $2}'; }
As yet another one, using the shasumcommand native to MacOS:
function sha256sum() { shasum -a 256 "$@" ; } && export -f sha256sum
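A quick sanity check (the /tmp file name is made up) that the openssl fallback produces the same digest as coreutils sha256sum; the shasum variant can be checked the same way on macOS:

```shell
# The openssl-based function body must agree with sha256sum's first field
printf 'hello\n' > /tmp/sha-demo.txt
a=$(sha256sum /tmp/sha-demo.txt | awk '{print $1}')
b=$(openssl sha256 /tmp/sha-demo.txt | awk '{print $2}')
[ "$a" = "$b" ] && echo match
```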
| No sha256sum in MacOS |
1,333,655,309,000 |
sha1sum outputs a hex encoded format of the actual sha. I would like to see a base64 encoded variant. possibly some command that outputs the binary version that I can pipe, like so: echo -n "message" | <some command> | base64 or if it outputs it directly that's fine too.
|
If you have the command line utility from OpenSSL, it can produce a digest in binary form, and it can even translate to base64 (in a separate invocation).
printf %s foo | openssl dgst -binary -sha1 | openssl base64 -A
-sha256, -sha512, etc are also supported.
| How can I get a base64 encoded shaX on the cli? |
1,333,655,309,000 |
I have the md5sum of a file and I don't know where it is on my system. Is there any easy option of find to identify a file based on its md5? Or do I need to develop a small script ?
I'm working on AIX 6 without the GNU tools.
|
Using find:
find /tmp/ -type f -exec md5sum {} + | grep '^file_md5sum_to_match'
If you are searching through / you can exclude /proc and /sys; see the following find command examples.
I also did some testing: find takes more time but less CPU and RAM, whereas the Ruby script takes less time but more CPU and RAM.
Test Result
Find
[root@dc1 ~]# time find / -type f -not -path "/proc/*" -not -path "/sys/*" -exec md5sum {} + | grep '^304a5fa2727ff9e6e101696a16cb0fc5'
304a5fa2727ff9e6e101696a16cb0fc5 /tmp/file1
real 6m20.113s
user 0m5.469s
sys 0m24.964s
Find with -prune
[root@dc1 ~]# time find / \( -path /proc -o -path /sys \) -prune -o -type f -exec md5sum {} + | grep '^304a5fa2727ff9e6e101696a16cb0fc5'
304a5fa2727ff9e6e101696a16cb0fc5 /tmp/file1
real 6m45.539s
user 0m5.758s
sys 0m25.107s
Ruby Script
[root@dc1 ~]# time ruby findm.rb
File Found at: /tmp/file1
real 1m3.065s
user 0m2.231s
sys 0m20.706s
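The approach above can be tried end-to-end in a throwaway directory; the file names and contents here are purely illustrative:

```shell
# Demo: locate a file knowing only its MD5 checksum.
tmp=$(mktemp -d)
mkdir "$tmp/sub"
printf 'needle\n' > "$tmp/sub/target"
printf 'hay\n'    > "$tmp/other"

# Pretend we only know the checksum, not the location:
want=$(md5sum "$tmp/sub/target" | awk '{print $1}')

# Hash every file under $tmp and keep the line whose hash matches:
found=$(find "$tmp" -type f -exec md5sum {} + | grep "^$want")
echo "$found"
```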
| Find a file when you know its checksum? |
1,333,655,309,000 |
In /etc/shadow file there are encrypted password.
Encrypted password is no longer crypt(3) or md5 "type 1" format. (according to this previous answer)
Now I have a
$6$somesalt$someveryverylongencryptedpasswd
as entry.
I can no longer use
openssl passwd -1 -salt salt hello-world
$1$salt$pJUW3ztI6C1N/anHwD6MB0
to generate encrypted passwd.
Is there any equivalent, like the following (which does not exist)?
openssl passwd -6 -salt salt hello-world
|
Python:
python -c 'import crypt; print crypt.crypt("password", "$6$saltsalt$")'
(for python 3 and greater it will be print(crypt.crypt(..., ...)))
Perl:
perl -e 'print crypt("password","\$6\$saltsalt\$") . "\n"'
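Note: newer OpenSSL releases (1.1.1 and later) did add exactly the flag the question wishes for, so on a recent system this also works:

```shell
# Requires OpenSSL >= 1.1.1 (older releases don't know -6):
hash6=$(openssl passwd -6 -salt saltsalt hello-world)
echo "$hash6"    # begins with $6$saltsalt$
```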
| /etc/shadow : how to generate $6$ 's encrypted password? [duplicate] |
1,333,655,309,000 |
I have four files that I created using an svndump
test.svn
test2.svn
test.svn.gz
test2.svn.gz
now when I run this
md5sum test2.svn test.svn test.svn.gz test2.svn.gz
Here is the output
89fc1d097345b0255825286d9b4d64c3 test2.svn
89fc1d097345b0255825286d9b4d64c3 test.svn
8284ebb8b4f860fbb3e03e63168b9c9e test.svn.gz
ab9411efcb74a466ea8e6faea5c0af9d test2.svn.gz
So I can't understand why gzip is compressing the files differently. Is it putting a timestamp somewhere before compressing? I had a similar issue with mysqldump, as it was putting the date field at the top.
|
gzip stores some of the original file's metadata in the record header, including the file modification time and filename, if available. See the GZIP file format specification.
So it's expected that your two gzip files aren't identical. You can work around this by passing gzip the -n flag, which stops it from including the original filename and timestamp in the header.
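A quick sketch of the effect of -n (the temp directory and file names are arbitrary; the timestamps are forced apart to make the point):

```shell
tmp=$(mktemp -d); cd "$tmp"
printf 'same payload\n' > a
cp a b                       # identical contents
touch -d '2000-01-01' a      # force different modification times
gzip -n a b                  # -n: don't store name or timestamp in the header
md5sum a.gz b.gz             # the two checksums now match
```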
| Why does the gzip version of files produce a different md5 checksum |
1,333,655,309,000 |
Under the assumption that disk I/O and free RAM is a bottleneck (while CPU time is not the limitation), does a tool exist that can calculate multiple message digests at once?
I am particularly interested in calculating the MD-5 and SHA-256 digests of large files (size in gigabytes), preferably in parallel. I have tried openssl dgst -sha256 -md5, but it only calculates the hash using one algorithm.
Pseudo-code for the expected behavior:
for each block:
for each algorithm:
hash_state[algorithm].update(block)
for each algorithm:
print algorithm, hash_state[algorithm].final_hash()
|
Check out pee ("tee standard input to pipes") from moreutils. This is basically equivalent to Marco's tee command, but a little simpler to type.
$ echo foo | pee md5sum sha256sum
d3b07384d113edec49eaa6238ad5ff00 -
b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c -
$ pee md5sum sha256sum <foo.iso
f109ffd6612e36e0fc1597eda65e9cf0 -
469a38cb785f8d47a0f85f968feff0be1d6f9398e353496ff7aa9055725bc63e -
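If moreutils isn't available, the same single-pass idea can be sketched with plain tee and a FIFO (this is my own variant, not from the answer above; it assumes coreutils md5sum/sha256sum):

```shell
tmp=$(mktemp -d)
printf foo > "$tmp/input"
mkfifo "$tmp/fifo"

# Start one digest reading from the FIFO in the background:
md5sum < "$tmp/fifo" > "$tmp/md5" &

# One pass over the input: tee copies it into the FIFO while the
# pipeline computes the other digest:
tee "$tmp/fifo" < "$tmp/input" | sha256sum > "$tmp/sha256"
wait   # wait for the background md5sum to finish

cat "$tmp/md5" "$tmp/sha256"
```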
| Simultaneously calculate multiple digests (md5, sha256)? |
1,333,655,309,000 |
Can we generate a unique ID for each PC, something like uuidgen, but one that will never change unless there are hardware changes? I was thinking about merging the CPUID and MAC address and hashing them to generate a consistent ID, but I have no idea how to parse them using a bash script. What I know is how to get the CPUID from
dmidecode -t 4 | grep ID
and
ifconfig | grep ether
then I need to combine those hex strings and hash them using sha1 or md5 to create a fixed-length hex string.
How can I parse that output?
|
How about these two:
$ sudo dmidecode -t 4 | grep ID | sed 's/.*ID://;s/ //g'
52060201FBFBEBBF
$ ifconfig | grep eth1 | awk '{print $NF}' | sed 's/://g'
0126c9da2c38
You can then combine and hash them with:
$ echo $(sudo dmidecode -t 4 | grep ID | sed 's/.*ID://;s/ //g') \
$(ifconfig | grep eth1 | awk '{print $NF}' | sed 's/://g') | sha256sum
59603d5e9957c23e7099c80bf137db19144cbb24efeeadfbd090f89a5f64041f -
To remove the trailing dash, add one more pipe:
$ echo $(sudo dmidecode -t 4 | grep ID | sed 's/.*ID://;s/ //g') \
$(ifconfig | grep eth1 | awk '{print $NF}' | sed 's/://g') | sha256sum |
awk '{print $1}'
59603d5e9957c23e7099c80bf137db19144cbb24efeeadfbd090f89a5f64041f
As @mikeserv points out in his answer, the interface name can change between boots. This means that what is eth0 today might be eth1 tomorrow, so if you grep for eth0 you might get a different MAC address on different boots. My system does not behave this way so I can't really test but possible solutions are:
Grep for HWaddr in the output of ifconfig but keep all of them, not just the one corresponding to a specific NIC. For example, on my system I have:
$ ifconfig | grep HWaddr
eth1 Link encap:Ethernet HWaddr 00:24:a9:bd:2c:28
wlan0 Link encap:Ethernet HWaddr c4:16:19:4f:ac:g5
By grabbing both MAC addresses and passing them through sha256sum, you should be able to get a unique and stable name, irrespective of which NIC is called what:
$ echo $(sudo dmidecode -t 4 | grep ID | sed 's/.*ID://;s/ //g') \
$(ifconfig | grep -oP 'HWaddr \K.*' | sed 's/://g') | sha256sum |
awk '{print $1}'
662f0036cba13c2ddcf11acebf087ebe1b5e4044603d534dab60d32813adc1a5
Note that the hash is different from the ones above because I am passing both MAC addresses returned by ifconfig to sha256sum.
Create a hash based on the UUIDs of your hard drive(s) instead:
$ blkid | grep -oP 'UUID="\K[^"]+' | sha256sum | awk '{print $1}'
162296a587c45fbf807bb7e43bda08f84c56651737243eb4a1a32ae974d6d7f4
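With placeholder values standing in for the dmidecode/ifconfig output (so this runs anywhere), the combine-and-hash step looks like:

```shell
# Placeholder inputs; on a real machine these would come from
# dmidecode and ifconfig as shown above:
cpuid=52060201FBFBEBBF
mac=0126c9da2c38

machine_id=$(printf '%s %s\n' "$cpuid" "$mac" | sha256sum | awk '{print $1}')
echo "$machine_id"   # 64 hex characters, stable for fixed inputs
```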
| generate consistent machine unique ID |
1,333,655,309,000 |
Does the hash of a file change if the filename or path or timestamp or permissions change?
$ echo some contents > testfile
$ shasum testfile
3a2be7b07a1a19072bf54c95a8c4a3fe0cdb35d4 testfile
|
Not as far as I can tell after a simple test.
$ echo some contents > testfile
$ shasum testfile
3a2be7b07a1a19072bf54c95a8c4a3fe0cdb35d4 testfile
$ mv testfile newfile
$ shasum newfile
3a2be7b07a1a19072bf54c95a8c4a3fe0cdb35d4 newfile
| Does the hash of a file change if the filename changes? |
1,333,655,309,000 |
I have the working password and can see the hash (/etc/passwd). How do I find the hashing algorithm used to hash the password, without manually trying different algorithms until I find a match?
|
This is documented in crypt(3)’s manpage, which you can find via shadow(5)’s manpage, or passwd(5)’s. Those links are appropriate for modern Linux-based systems; the description there is:
If salt is a character string starting with the characters "$id$"
followed by a string optionally terminated by "$", then the result
has the form:
$id$salt$encrypted
id identifies the encryption method used instead of DES and this then
determines how the rest of the password string is interpreted. The
following values of id are supported:
ID | Method
─────────────────────────────────────────────────────────
1 | MD5
2a | Blowfish (not in mainline glibc; added in some
| Linux distributions)
5 | SHA-256 (since glibc 2.7)
6 | SHA-512 (since glibc 2.7)
Blowfish, also known as bcrypt, is also identified by prefixes 2, 2b, 2x, and 2y (see PassLib’s documentation).
So if a hashed password is stored in the above format, you can find the algorithm used by looking at the id; otherwise it’s crypt’s default DES algorithm (with a 13-character hash), or “big” crypt’s DES (extended to support 128-character passwords, with hashes up to 178 characters in length), or BSDI extended DES (with a _ prefix followed by a 19-character hash).
Some distributions use libxcrypt which supports and documents quite a few more methods:
y: yescrypt
gy: gost-yescrypt
7: scrypt
sha1: sha1crypt
md5: SunMD5
Other platforms support other algorithms, so check the crypt manpage there. For example, OpenBSD’s crypt(3) only supports Blowfish, which it identifies using the id “2b”.
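As a small illustration of reading the $id$ prefix, here is a toy shell helper covering a subset of the IDs above (the function name is my own invention):

```shell
# Hypothetical prefix classifier (subset of the methods listed above):
identify_hash() {
    case $1 in
        '$1$'*)                                 echo MD5 ;;
        '$2$'*|'$2a$'*|'$2b$'*|'$2x$'*|'$2y$'*) echo bcrypt ;;
        '$5$'*)                                 echo SHA-256 ;;
        '$6$'*)                                 echo SHA-512 ;;
        '_'*)                                   echo 'BSDI extended DES' ;;
        *)                                      echo 'DES (or another scheme)' ;;
    esac
}

identify_hash '$6$somesalt$someveryverylongencryptedpasswd'   # → SHA-512
```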
| How to find the hashing algorithm used to hash passwords? |
1,333,655,309,000 |
A careful examination of the /etc/passwd and /etc/shadow files reveal that the passwords stored are hashed using some form of hashing function.
A quick Google search reveals that by default, the passwords are encrypted using DES. If an entry begins with $, then it indicates that some other hashing function was used.
For example, some entries on my Ubuntu machine begin with $6$...
What do the various numbers represent?
|
The full list is in man 5 crypt (web version):
Prefix           Hashing Method
"$y$"            yescrypt
"$gy$"           gost-yescrypt
"$7$"            scrypt
"$2b$"           bcrypt
"$6$"            sha512crypt
"$5$"            sha256crypt
"$sha1"          sha1crypt
"$md5"           SunMD5
"$1$"            md5crypt
"_"              bsdicrypt (BSDI extended DES)
(empty string)   bigcrypt
(empty string)   descrypt (Traditional DES)
"$3$"            NT
(Blowfish can be either $2$ or $2a$ according to Wikipedia Crypt (Unix).)
So $6$ means SHA-512.
Which one your system uses is governed by any options passed to the pam_unix PAM module.
The default on the latest version of Ubuntu is set in /etc/pam.d/common-password:
password [success=1 default=ignore] pam_unix.so obscure sha512
which means that next time you change your password, it will be hashed using SHA-512, assuming your account is local, rather than NIS/LDAP/Kerberos, etc.
See also:
FreeBSD crypt
ArchLinux Blowfish passwords
NetBSD crypt(3)
PHP crypt function
| What methods are used to encrypt passwords in /etc/passwd and /etc/shadow? |
1,333,655,309,000 |
I have a folder with duplicate (by md5sum (md5 on a Mac)) files, and I want to have a cron job scheduled to remove any found.
However, I'm stuck on how to do this. What I have so far:
md5 -r * | sort
Which outputs something like this:
04c5d52b7acdfbecd5f3bdd8a39bf8fb gordondam_en-au11915031300_1366x768.jpg
1e88c6899920d2c192897c886e764fc2 fortbourtange_zh-cn9788197909_1366x768.jpg
266ea304b15bf4a5650f95cf385b16de nebraskasupercell_fr-fr11286079811_1366x768.jpg
324735b755c40d332213899fa545c463 grossescheidegg_en-us10868142387_1366x768.jpg
3993028fcea692328e097de50b26f540 Soyuz Spacecraft Rolled Out For Launch of One Year Crew.png
677bcd6006a305f4601bfb27699403b0 lechaustria_zh-cn7190263094_1366x768.jpg
80d03451b88ec29bff7d48f292a25ce6 ontariosunrise_en-ca10284703762_1366x768.jpg
b6d9d24531bc62d2a26244d24624c4b1 manateeday_row10617199289_1366x768.jpg
ca1486dbdb31ef6af83e5a40809ec561 Grueling Coursework.jpg
cdf26393577ac2a61b6ce85d22daed24 Star trails over Mauna Kea.jpg
dc3ad6658d8f8155c74054991910f39c smoocave_en-au10358472670_1366x768.jpg
dc3ad6658d8f8155c74054991910f39c smoocave_en-au10358472670_1366x7682.jpg
How can I process based on the MD5 of the file to remove duplicates? I don't really care which "original" I keep - but I only want to keep one.
Should I be approaching this in a different manner?
|
I'm working on Linux, which means there is the command md5sum, which outputs:
> md5sum *
d41d8cd98f00b204e9800998ecf8427e file_1
d41d8cd98f00b204e9800998ecf8427e file_10
d41d8cd98f00b204e9800998ecf8427e file_2
d41d8cd98f00b204e9800998ecf8427e file_3
d41d8cd98f00b204e9800998ecf8427e file_4
d41d8cd98f00b204e9800998ecf8427e file_5
d41d8cd98f00b204e9800998ecf8427e file_6
d41d8cd98f00b204e9800998ecf8427e file_7
d41d8cd98f00b204e9800998ecf8427e file_8
d41d8cd98f00b204e9800998ecf8427e file_9
b026324c6904b2a9cb4b88d6d61c81d1 other_file_1
31d30eea8d0968d6458e0ad0027c9f80 other_file_10
26ab0db90d72e28ad0ba1e22ee510510 other_file_2
6d7fce9fee471194aa8b5b6e47267f03 other_file_3
48a24b70a0b376535542b996af517398 other_file_4
1dcca23355272056f04fe8bf20edfce0 other_file_5
9ae0ea9e3c9c6e1b9b6252c8395efdc1 other_file_6
84bc3da1b3e33a18e8d5e1bdd7a18d7a other_file_7
c30f7472766d25af1dc80b3ffc9a58c7 other_file_8
7c5aba41f53293b712fd86d08ed5b36e other_file_9
Now using awk and xargs the command would be:
md5sum * | \
sort | \
awk 'BEGIN{lasthash = ""} $1 == lasthash {print $2} {lasthash = $1}' | \
xargs rm
The awk part initializes lasthash with the empty string, which will not match any hash, and then checks for each line if the hash in lasthash is the same as the hash (first column) of the current file (second column). If it is, it prints it out. At the end of every step it will set lasthash to the hash of the current file (you could limit this to only be set if the hashes are different, but that should be a minor thing especially if you do not have many matching files). The filenames awk spits out are fed to rm with xargs, which basically calls rm with what the awk part gives us.
You probably need to filter directories before md5sum *.
Edit:
Using Marcins method you could also use this one:
comm -23 \
    <(ls | sort) \
    <(md5sum * | \
      sort -k1 | \
      uniq -w 32 | \
      awk '{print $2}' | \
      sort) | \
xargs rm
This subtracts from the file list obtained by ls the first filename of each unique hash obtained by md5sum * | sort -k1 | uniq -w 32 | awk '{print $2}', leaving only the duplicates for xargs rm.
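A self-contained dry run of the awk-based pipeline, in a throwaway directory (filenames are assumed to be free of whitespace):

```shell
tmp=$(mktemp -d); cd "$tmp"
printf same  > a
printf same  > b      # duplicate of a
printf other > c

# Print each filename whose hash repeats the previous line's hash,
# then delete those files:
md5sum * | sort |
    awk '$1 == last { print $2 } { last = $1 }' |
    xargs -r rm --

ls    # a and c survive; the duplicate b is gone
```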
| How to remove duplicate files using bash |
1,333,655,309,000 |
I want to verify a file using md5sum -c file.md5. I can do that by hand, but I don't know how to check the validity in a script.
|
You can use md5sum's return status:
if md5sum -c file.md5; then
# The MD5 sum matched
else
# The MD5 sum didn't match
fi
To make things cleaner, you can add --status to tell md5sum (perhaps GNU's version only) to be silent:
if md5sum --status -c file.md5; then
# The MD5 sum matched
else
# The MD5 sum didn't match
fi
Shorter forms work just as well if appropriate:
md5sum --status -c file.md5 && echo OK
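Putting it together in a runnable sketch (temp directory and filenames are arbitrary):

```shell
tmp=$(mktemp -d); cd "$tmp"
printf 'hello\n' > data.txt
md5sum data.txt > file.md5        # record the checksum

if md5sum --status -c file.md5; then
    result=OK
else
    result=CORRUPT
fi
echo "$result"                    # → OK

printf 'tampered\n' > data.txt    # modify the file...
md5sum --status -c file.md5 || echo 'checksum mismatch detected'
```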
| Use md5sum to verify file in a script |
1,333,655,309,000 |
I am trying to make a script that detects if any of the files in a directory were changed within a 2 seconds interval.
What I have so far is:
#!/bin/bash
for FILE in "${PWD}/*"
do
SUM1="$(md5sum $FILE)"
sleep 2
SUM2="$(md5sum $FILE)"
if [ "$SUM1" = "$SUM2" ];
then
echo "Identical"
else
echo "Different"
fi
done
This outputs the value "Identical" only once; I want it to check each file and output "Identical" or "Different" for each one.
Edit: Can this be done without installing the inotify-tools package?
|
As others have explained, using inotify is the better solution. I'll just explain why your script fails. First of all, no matter what language you are programming in, whenever you try to debug something, the first rule is "print all the variables":
$ ls
file1 file2 file3
$ echo $PWD
/home/terdon/foo
$ for FILE in "${PWD}/*"; do echo "$FILE"; done
/home/terdon/foo/*
So, as you can see above, $FILE is actually expanded to $PWD/*. Therefore, the loop is only run once on the string /home/terdon/foo/* and not on each of the files in the directory individually. Then, the md5sum command becomes:
md5sum /home/terdon/foo/*
In other words, it runs md5sum on all files in the target directory at once and not on each of them.
The problem is that you are quoting your glob expansion and that stops it from being expanded:
$ echo "*"
*
$ echo *
file1 file2 file3
While variables should almost always be quoted, globs shouldn't since that makes them into strings instead of globs.
What you meant to do is:
for FILE in "${PWD}"/*; do ...
However, there is no reason to use $PWD here, it's not adding anything useful. The line above is equivalent to:
for FILE in *; do
Also, avoid using CAPITAL letters for shell variables. Those are used for the system-set environmental variables and it is better to keep your own variables in lower case.
With all this in mind, here's a working, improved version of your script:
#!/bin/bash
for file in *
do
sum1="$(md5sum "$file")"
sleep 2
sum2="$(md5sum "$file")"
if [ "$sum1" = "$sum2" ];
then
echo "Identical"
else
echo "Different"
fi
done
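As a variant (my own assumption-laden sketch, not from the answer above): instead of hashing each file twice in a loop, you can snapshot all the checksums once with md5sum and re-verify them with -c after the delay, which reports per-file status in one pass:

```shell
tmp=$(mktemp -d); cd "$tmp"
printf one > f1
printf two > f2

md5sum f1 f2 > before.sums   # snapshot
# sleep 2                    # the real script would wait here
printf CHANGED > f2          # simulate a modification

# -c re-checks every listed file and prints "name: OK" or "name: FAILED":
md5sum -c before.sums 2>/dev/null || true
```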
| Bash script detecting change in files from a directory |