1,380,615,765,000 |
When I do:
# gzip -c foo > foo1.gz
# gzip < foo > foo2.gz
Why does foo2.gz end up being smaller in size than foo1.gz?
|
Because it's saving the filename and timestamp so that it can try to restore both after you decompress it later. Since foo is given to gzip via <stdin> in your second example, it can't store the filename and timestamp information.
From the manpage:
-n --no-name
When compressing, do not save the original file name and time stamp by default. (The original name is always saved if the name had
to be truncated.) When decompressing, do not restore the original file name if present (remove only the gzip suffix from the com-
pressed file name) and do not restore the original time stamp if present (copy it from the compressed file). This option is the
default when decompressing.
-N --name
When compressing, always save the original file name and time stamp; this is the default. When decompressing, restore the original
file name and time stamp if present. This option is useful on systems which have a limit on file name length or when the time
stamp has been lost after a file transfer.
I've recreated the issue here:
[root@xxx601 ~]# cat /etc/fstab > file.txt
[root@xxx601 ~]# gzip < file.txt > file.txt.gz
[root@xxx601 ~]# gzip -c file.txt > file2.txt.gz
[root@xxx601 ~]# ll -h file*
-rw-r--r--. 1 root root 465 May 17 19:35 file2.txt.gz
-rw-r--r--. 1 root root 1.2K May 17 19:34 file.txt
-rw-r--r--. 1 root root 456 May 17 19:34 file.txt.gz
In my example, file.txt.gz is the equivalent of your foo2.gz. Using the -n option disables this behavior when it otherwise would have access to the information:
[root@xxx601 ~]# gzip -nc file.txt > file3.txt.gz
[root@xxx601 ~]# ll -h file*
-rw-r--r--. 1 root root 465 May 17 19:35 file2.txt.gz
-rw-r--r--. 1 root root 456 May 17 19:43 file3.txt.gz
-rw-r--r--. 1 root root 1.2K May 17 19:34 file.txt
-rw-r--r--. 1 root root 456 May 17 19:34 file.txt.gz
As you can see above, the file sizes for file.txt.gz and file3.txt.gz match, since both now omit the name and date.
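To see exactly which metadata is stored, you can inspect the gzip header directly: byte 3 is the FLG field, and its 0x08 (FNAME) bit is set only when a filename is embedded. A minimal sketch (the demo file names are made up):

```shell
# Create two archives from the same data, with and without metadata
printf 'hello world\n' > demo.txt
gzip -c  demo.txt > with-meta.gz   # stores name + mtime
gzip -nc demo.txt > no-meta.gz     # stores neither
# FLG byte at offset 3: 0x08 (FNAME) is set only when the name was saved
od -An -j3 -N1 -t x1 with-meta.gz   # -> 08
od -An -j3 -N1 -t x1 no-meta.gz     # -> 00
```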
| Why does gzipping a file on stdin yield a smaller output than the same file given as an argument? |
1,380,615,765,000 |
I am using wget to download a static html page. The W3C Validator tells me the page is encoded in UTF-8. Yet when I cat the file after download, I get a bunch of binary nonsense. I'm on Ubuntu, and I thought the default encoding was UTF-8? That's what my locale file seems to say. Why is this happening and how can I correct it?
Also, it looks like the response has Content-Encoding: gzip. Perhaps this makes a difference?
This is the simple request:
wget https://www.example.com/page.html
I also tried this:
wget https://www.example.com/page.html -q -O - | iconv -f utf-16 -t utf-8 > output.html
Which returned: iconv: illegal input sequence at position 40
cat'ing the file returns binary that looks like this:
l�?חu�`�q"�:)s��dġ__��~i��6n)T�$H�#���QJ
Result of xxd output.html | head -20 :
00000000: 1f8b 0800 0000 0000 0003 bd56 518f db44 ...........VQ..D
00000010: 107e a6bf 62d4 8a1e 48b9 d8be 4268 9303 .~..b...H...Bh..
00000020: 8956 082a 155e 7a02 21dd cbd8 3bb6 97ae .V.*.^z.!...;...
00000030: 77cd ee38 39f7 a1bf 9d19 3bb9 0bbd 9c40 w..89.....;....@
00000040: 2088 12c5 de9d 9df9 be99 6f67 f751 9699 .........og.Q..
00000050: 500d 1d79 5eee a265 faec 7151 e4ab 6205 P..y^..e..qQ..b.
00000060: 4dd3 0014 1790 e7d0 77c0 ef2f cbf8 cde3 M.......w../....
00000070: cf1f 7d6c 7d69 ec16 d0d9 c67f 7d7d 56c9 ..}l}i......}}V.
00000080: 04c5 eb33 35fc e49e 2563 e908 ca10 0d45 ...35...%c.....E
00000090: 31ce afcf a022 e77a 34c6 fa46 46be d88f 1....".z4..FF...
000000a0: a41e ab79 446d 76d6 702b cf45 9e7f ba77 ...yDmv.p+.E...w
000000b0: 7dc2 779c 274e cc18 483c 3a12 0f75 f07c }.w.'N..H<:..u.|
000000c0: 5e63 67dd b886 ab48 e550 b5c4 f0e3 db0d ^cg....H.P......
000000d0: 54c1 85b8 8627 2ff3 2ff3 17f9 0626 d31d T....'/./....&..
000000e0: d9a6 e5b5 4076 663f 94ec 7b5a 17cf 7ade ....@vf?..{Z..z.
000000f0: 00d3 0d9f 4fcc d733 ef8d a0bb 0a06 c7eb ....O..3........
00000100: b304 6fb1 b1cc 18ed 90e0 8710 43aa 424f ..o.........C.BO
00000110: 50c7 d0c1 2bac 09be 4d1c 2566 335e 666c P...+...M.%f3^fl
00000120: 1e20 951d 58fd 6774 f3e9 f317 749f 7fc4 . ..X.gt....t...
00000130: d651 cdca f5a7 b0a5 aea4 08ab 055c e4c5 .Q...........\..
Also, strangely, the output file seems to open properly in TextWrangler!
|
This is a gzip compressed file. You can find this out by running the file command, which figures out the file format from magic numbers in the data (this is how programs such as Text Wrangler figure out that the file is compressed as well):
file output.html
wget -O - … | file -
The server (I guessed it from the content you showed) is sending gzipped data and correctly setting the header
Content-Encoding: gzip
but wget doesn't support that. In recent versions, wget sends Accept-encoding: identity, to tell the server not to compress or otherwise encode the data. In older versions, you can send the header manually:
wget --header 'Accept-encoding: identity' …
However this particular server appears to be broken: it sends compressed data even when told not to encode the data in any way. So you'll have to decompress the data manually.
wget -O output.html.gz … && gunzip output.html.gz
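If you're scripting this, you can check for gzip's magic number (bytes 1f 8b) before deciding whether to decompress. A hedged sketch, with a locally generated file standing in for the download:

```shell
f=output.html
printf 'test page\n' | gzip > "$f"        # simulate a gzipped download
# Only decompress when the file actually starts with the gzip magic bytes
if [ "$(od -An -N2 -t x1 "$f" | tr -d ' ')" = "1f8b" ]; then
    mv "$f" "$f.gz" && gunzip "$f.gz"     # now $f holds the real HTML
fi
```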
| Wget returning binary instead of html? |
1,380,615,765,000 |
mknod /tmp/oracle.pipe p
sqlplus / as sysdba << _EOF
set escape on
host nohup gzip -c < /tmp/oracle.pipe > /tmp/out1.gz \&
spool /tmp/oracle.pipe
select * from employee;
spool off
_EOF
rm /tmp/oracle.pipe
I need to insert a trailer at the end of the zipped file out1.gz.
I can count the lines using
count=$(zcat out1.gz | wc -l)
How do I insert the trailer
T5 (assuming count=5)
at the end of out1.gz without unzipping it?
|
From man gzip you can read that gzipped files can simply be concatenated:
ADVANCED USAGE
Multiple compressed files can be concatenated. In this case, gunzip will extract all members at once. For example:
gzip -c file1 > foo.gz
gzip -c file2 >> foo.gz
Then
gunzip -c foo
is equivalent to
cat file1 file2
This could also be done using cat for the gzipped files, e.g.:
seq 1 4 > A && gzip A
echo 5 > B && gzip B
#now 1 to 4 is in A.gz and 5 in B.gz, we want 1 to 5 in C.gz:
cat A.gz B.gz > C.gz && zcat C.gz
1
2
3
4
5
#or for appending B.gz to A.gz:
cat B.gz >> A.gz
To append your line without creating an external file, do as follows:
echo "this is the new line" | gzip - >> original_file.gz
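Applied to the question's trailer, the whole thing could look like this (a sketch; seq stands in for the spooled Oracle output):

```shell
seq 1 5 | gzip > out1.gz                   # pretend this is the spooled data
count=$(zcat out1.gz | wc -l)              # count the data lines
printf 'T%d\n' "$count" | gzip >> out1.gz  # append the trailer as a new gzip member
zcat out1.gz | tail -n 1                   # -> T5
```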
| How to append a line in a zipped file without unzipping? |
1,380,615,765,000 |
I packed and compressed a folder to a .tar.gz archive.
After unpacking it was nearly twice as big.
du -sh /path/to/old/folder = 263M
du -sh /path/to/extracted/folder = 420M
I searched a lot and found out that tar is actually causing this issue by adding metadata or doing other weird stuff with it.
I made a diff on 2 files inside the folder, as well as an md5sum. There is absolutely no diff and the checksum is the exact same value. Yet, one file is twice as big as the original one.
root@server:~# du -sh /path/to/old/folder/subfolder/file.mcapm /path/to/extracted/folder/subfolder/file.mcapm
1.1M /path/to/old/folder/subfolder/file.mcapm
2.4M /path/to/extracted/folder/subfolder/file.mcapm
root@server:~# diff /path/to/old/folder/subfolder/file.mcapm /path/to/extracted/folder/subfolder/file.mcapm
root@server:~#
root@server:~# md5sum /path/to/old/folder/subfolder/file.mcapm
root@server:~# f11787a7dd9dcaa510bb63eeaad3f2ad
root@server:~# md5sum /path/to/extracted/folder/subfolder/file.mcapm
root@server:~# f11787a7dd9dcaa510bb63eeaad3f2ad
I am not searching for different methods, but for a way to reduce the size of those files again to their original size.
How can I achieve that?
|
[this answer is assuming GNU tar and GNU cp]
There is absolutely no diff and the checksum is the exact same value. Yet, one file is as twice as big as the original one.
1.1M /path/to/old/folder/subfolder/file.mcapm
2.4M /path/to/extracted/folder/subfolder/file.mcapm
That .mcapm file is probably sparse. Use the -S (--sparse) tar option when creating the archive.
Example:
$ dd if=/dev/null seek=100 of=dummy
...
$ mkdir extracted
$ tar -zcf dummy.tgz dummy
$ tar -C extracted -zxf dummy.tgz
$ du -sh dummy extracted/dummy
0 dummy
52K extracted/dummy
$ tar -S -zcf dummy.tgz dummy
$ tar -C extracted -zxf dummy.tgz
$ du -sh dummy extracted/dummy
0 dummy
0 extracted/dummy
You can also "re-sparse" a file afterwards with cp --sparse=always:
$ dd if=/dev/zero of=junk count=100
...
$ du -sh junk
52K junk
$ cp --sparse=always junk junk.sparse && mv junk.sparse junk
$ du -sh junk
0 junk
| Make grown extracted tar file small again |
1,380,615,765,000 |
I had a large (~60G) compressed file (tar.gz).
I used split to break it into 4 parts and then cat to join them back together.
However, now, when I am trying to estimate the size of the uncompressed file, it turns out it is smaller than the original? How is this possible?
$ gzip -l myfile.tar.gz
compressed uncompressed ratio uncompressed_name
60680003101 3985780736 -1422.4% myfile.tar
|
This is caused by the size of the field used to store the uncompressed size in gzipped files: it’s only 32 bits, so gzip can only store sizes of files up to 4 GiB. Anything larger is compressed and uncompressed correctly, but gzip -l gives an incorrect uncompressed size in versions 1.11 and older.
So splitting the tarball and reconstructing it hasn’t caused this, and shouldn’t have affected the file — if you want to make sure, you can check it with gzip -tv.
See Fastest way of working out uncompressed size of large GZIPPED file for more details, and the gzip manual:
The gzip format represents the input size modulo 2³², so the uncompressed size and compression ratio are listed incorrectly for uncompressed files 4 GiB and larger.
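That 32-bit field is ISIZE, the last four bytes of the gzip stream. You can read it directly; a minimal sketch on a small file (od's native-endian read matches the on-disk little-endian layout on x86/ARM):

```shell
printf 'abcdefgh' | gzip > s.gz          # 8 bytes of input
# ISIZE = uncompressed size mod 2^32, stored in the final 4 bytes
od -An -j "$(( $(wc -c < s.gz) - 4 ))" -t u4 s.gz   # -> 8
```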
| Uncompressed file estimation wrong? |
1,331,533,062,000 |
I will be backing up a large (750GB) disk to an external USB disk using dd.
Should I be using redirection or piping? Which is more efficient? Or is there a difference?
Also, what is the best block size? USB is likely to be the bottleneck here.
dd if=/dev/sda bs=1M | gzip -c > /mnt/sdb1/backups/disk.img.gz
gzip -dc /mnt/sdb1/backups/disk.img.gz | dd of=/dev/sda bs=1M
vs
dd if=/dev/sda bs=1M | gzip -c | dd of=/mnt/sdb1/backups/disk.img.gz
dd if=/mnt/sdb1/backups/disk.img.gz | gzip -dc | dd of=/dev/sda bs=1M
Thanks.
|
You don't need to use dd or piping at all.
</dev/sda gzip >/mnt/sdb1/backups/disk.img.gz
</mnt/sdb1/backups/disk.img.gz gunzip >/dev/sda
I once made a benchmark and found using dd slower than cat for a straight copy between different disks. I would expect the pipe to make any solution involving dd even slower in this case.
| gzip - redirection or piping? |
1,331,533,062,000 |
Is there a quick way to check if a gzipped file is empty, or do I have to unzip it first?
example:
$ touch foo
$ if [ -s foo ]; then echo not empty; fi
$ gzip foo
$ if [ -s foo.gz ]; then echo not empty; fi
not empty
$ wc -l foo.gz
1 foo.gz
|
gzip -l foo.gz | awk 'NR==2 {print $2}' prints the size of the uncompressed data.
if LC_ALL=C gzip -l foo.gz | awk 'NR==2 {exit($2!=0)}'; then
echo foo is empty
else
echo foo is not empty
fi
Alternatively you can start uncompressing the data.
if [ -n "$(gunzip <foo.gz | head -c 1 | tr '\0\n' __)" ]; then
echo "foo is not empty"
else
echo "foo is empty"
fi
(If your system doesn't have head -c to extract the first byte, use head -n 1 to extract the first line instead.)
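Wrapped up as a function, the second approach might look like this (a sketch; is_empty is a made-up helper name):

```shell
# Return success iff the gzip archive decompresses to zero bytes
is_empty() { [ -z "$(gunzip < "$1" | head -c 1 | tr '\0\n' __)" ]; }

: | gzip > empty.gz          # gzip of zero bytes
echo data | gzip > full.gz
is_empty empty.gz && echo "empty.gz is empty"
is_empty full.gz  || echo "full.gz is not empty"
```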
| How can I check if a gzipped file is empty? |
1,331,533,062,000 |
I know how to gunzip a file to a selected location.
But when it comes to utilizing all CPU power, many consider pigz instead of gzip. So, the question is how do I unpigz (and untar) a *.tar.gz file to a specific directory?
|
I found three solutions:
With GNU tar, using the awesome -I option:
tar -I pigz -xvf /path/to/archive.tar.gz -C /where/to/unpack/it/
With a lot of Linux piping (a "geek way"):
unpigz < /path/to/archive.tar.gz | tar -xvC /where/to/unpack/it/
More portable (to other tar implementations):
unpigz < /path/to/archive.tar.gz | (cd /where/to/unpack/it/ && tar xvf -)
(You can also replace tar xvf - with pax -r to make it POSIX-compliant, though not necessarily more portable on Linux-based systems.)
Credits go to @PSkocik for a proper direction, @Stéphane Chazelas for the 3rd variant and to the author of this answer.
| unpigz (and untar) to a specific directory |
1,331,533,062,000 |
To illustrate the point: I have downloaded the LEDA library from the company's website. Using tar -xzf on it fails:
$ tar -xzf LEDA-6.3-free-fedora-core-8-64-g++-4.1.2-mt.tar.gz
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Exiting with failure status due to previous errors
However, gunzip followed by tar -xf works just fine:
$ gunzip LEDA-6.3-free-fedora-core-8-64-g++-4.1.2-mt.tar.gz
$ tar -xf LEDA-6.3-free-fedora-core-8-64-g++-4.1.2-mt.tar
# no error
Can anyone tell me why this could be? I'd want the standard tar command to work all the time.
|
What appears to have happened is that they've double compressed the archive.
If you run file on your gunzip'd file, you'll find it's still a gzip archive. And if you rename it to have .gz again, you can gunzip it again.
It seems recent GNU tar will automatically add the -z option, provided the input is a file. That's why it works without the -z option after you'd already run gunzip once: tar added it automatically.
This behavior is documented, from the info page:
"Reading compressed archive is even simpler: you don't need to specify
any additional options as GNU `tar' recognizes its format automatically. [...]
The format recognition algorithm is based on "signatures", a special
byte sequences in the beginning of file, that are specific for certain
compression formats."
That's from §8.1.1 "Creating and Reading Compressed Archives."
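You can reproduce the double-compression scenario and watch GNU tar cope with it (a hedged demo; the file names are made up):

```shell
echo hello > f.txt
tar -cf inner.tar f.txt
gzip -c inner.tar | gzip -c > double.tar.gz   # compressed twice by mistake
gunzip double.tar.gz                          # one layer removed...
od -An -N2 -t x1 double.tar                   # -> 1f 8b: still gzip data!
tar -xf double.tar                            # GNU tar autodetects and peels the second layer
```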
| Under what circumstances does gunzip & tar xf work but tar xzf fail? |
1,331,533,062,000 |
I am learning Linux and I was trying the gzip command. I tried it on a folder which has a hierarchy like
Personal/Folder1/file1.amr
Personal/Folder2/file2.amr
Personal/Folder3/file3.amr
Personal/Folder4/file4.amr
I ran
"gzip -r Personal"
and now it's like
Personal/Folder1/file1.amr.gz
Personal/Folder2/file2.amr.gz
Personal/Folder3/file3.amr.gz
Personal/Folder4/file4.amr.gz
How do I go back?
|
You can use
gunzip -r Personal
which works the same as
gzip -d -r Personal
If gzip on your system does not have the -r option (e.g. busybox's gzip) , you can use
find Personal -name "*.gz" -type f -print0 | xargs -0 gunzip
| How to gunzip files recursively (or how to UNDO 'gzip -r') |
1,331,533,062,000 |
I have a gzip archive with trailing data. If I unpack it using gzip -d it tells me: "decompression OK, trailing garbage ignored" (same goes for gzip -t which can be used as a method of detecting that there is such data).
Now I would like to get to know this garbage, but strangely enough I couldn't find any way to extract it. gzip -l --verbose tells me that the "compressed" size of the archive is the size of the file (i.e. with the trailing data), that's wrong and not helpful. file is also of no help, so what can I do?
|
I've now figured out how to get the trailing data.
I created a Perl script which writes the trailing data to a file; it's heavily based on https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=604617#10:
#!/usr/bin/perl
use strict;
use warnings;
use IO::Uncompress::Gunzip qw(:all);
use IO::File;
unshift(@ARGV, '-') unless -t STDIN;
my $input_file_name = shift;
my $output_file_name = shift;
if (! defined $input_file_name) {
die <<END;
Usage:
$0 ( GZIP_FILE | - ) [OUTPUT_FILE]
... | $0 [OUTPUT_FILE]
Extracts the trailing data of a gzip archive.
Outputs to stdout if no OUTPUT_FILE is given.
- as input file causes it to read from stdin.
Examples:
$0 archive.tgz trailing.bin
cat archive.tgz | $0
END
}
my $in = new IO::File "<$input_file_name" or die "Couldn't open gzip file.\n";
gunzip $in => "/dev/null",
TrailingData => my $trailing;
undef $in;
if (! defined $output_file_name) {
print $trailing;
} else {
open(my $fh, ">", $output_file_name) or die "Couldn't open output file.\n";
print $fh $trailing;
close $fh;
print "Output file written.\n";
}
| How to get trailing data of gzip archive? |
1,331,533,062,000 |
How can I know the raw original timestamp of a file foo compressed with gzip without having to decompress foo.gz?
gzip --verbose --list foo.gz and file foo.gz will print formatted date and time.
|
Extract the timestamp manually. Assuming that the compressed file has a single member (this is normally the case with gzip):
<foo.gz dd bs=4 skip=1 count=1 | od -t d4
This prints the raw timestamp, i.e. the number of seconds since 1970-01-01 00:00 UTC, in decimal.
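A quick round-trip shows the extracted number matching the input file's mtime (a sketch; assumes GNU touch/date for the fixed timestamp, and a little-endian host so od's native read matches the on-disk byte order):

```shell
echo data > foo
touch -d '2001-02-03 04:05:06 UTC' foo
gzip -c foo > foo.gz
# bytes 4-7 hold MTIME, little-endian (-An suppresses od's address column)
ts=$(<foo.gz dd bs=4 skip=1 count=1 2>/dev/null | od -An -t d4 | tr -d ' ')
echo "$ts"          # -> 981173106
date -u -d "@$ts"   # back to the human-readable date
```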
| Extract timestamp from a gzip file |
1,331,533,062,000 |
I have a folder with 36,348 gz files. I want to unzip all of them.
Running:
gunzip ./*
results in
-bash: /usr/bin/gunzip: Argument list too long
What's the easiest way to get around this?
|
Try:
find . -type f -exec gunzip {} +
This assumes that the current directory contains only files that you want to unzip.
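If the directory might also hold files that aren't compressed, it's safer to match only the .gz ones (a sketch on throwaway files):

```shell
mkdir -p logs && echo 1 | gzip > logs/a.gz && echo keep > logs/plain.txt
# -name '*.gz' leaves non-gzip files alone
find logs -type f -name '*.gz' -exec gunzip {} +
ls logs     # -> a  plain.txt
```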
| gunzip a folder with many files |
1,331,533,062,000 |
I try to pack a .csv file with tar.gz, while being in the root directory.
The file myfile.csv is located at /mnt/sdb1/
So the full filename is /mnt/sdb1/myfile.csv
I try to save the tar.gz under /mnt/sdb1/old_files
I tried it like this:
tar -czf /mnt/sdb1/old_files/new.tar.gz mnt/sdb1/myfile.csv
But when I extract the file, a folder named "mnt" will be extracted, which contains another folder called "sdb1", which contains the file.
Is it possible to compress the file only, instead of copying all the directories?
|
use the --directory option from man tar :
-C, --directory DIR
change to directory DIR
i.e.:
tar -C /mnt/sdb1/ -czf /mnt/sdb1/old_files/new.tar.gz myfile.csv
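A quick check that only the basename ends up in the archive (hedged demo using temporary paths instead of /mnt/sdb1):

```shell
mkdir -p src old_files
echo 'a,b,c' > src/myfile.csv
tar -C src -czf old_files/new.tar.gz myfile.csv
tar -tzf old_files/new.tar.gz    # -> myfile.csv (no directory prefix)
```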
| Pack file with tar.gz from root directory |
1,331,533,062,000 |
Say I have a file of 80GB /root/bigfile on a 100GB system and want to put this file in a archive /root/bigarchive.tar
I obviously need to delete this file at the same time that it is added in the archive. Hence my question:
How to delete a file at the same time that it is added in an archive?
|
If you're using GNU tar command, you can use the --remove-files option:
--remove-files
remove files after adding them to the archive
tar -cvf files.tar --remove-files my_directory
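A small demonstration (hedged; the file names are made up):

```shell
mkdir -p my_directory
echo payload > my_directory/bigfile
tar -cf files.tar --remove-files my_directory
test ! -e my_directory && echo 'source directory removed'
tar -tf files.tar    # the data now lives only in the archive
```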
| How to add a huge file to an archive and delete it in parallel |
1,331,533,062,000 |
I am looking for a compression tool with an arbitrarily large dictionary (and "block size"). Let me explain by way of examples.
First let us create 32MB of random data and then concatenate it with itself to make a file of twice the length, 64MB.
head -c32M /dev/urandom > test32.bin
cat test32.bin test32.bin > test64.bin
Of course test32.bin is not compressible because it is random but the first half of test64.bin is the same as the second half, so it should be compressible by roughly 50%.
First let's try some standard tools. test64.bin is of size exactly 67108864.
gzip -9. Compressed size 67119133.
bzip2 -9. Compressed size 67409123. (A really big overhead!)
xz -7. Compressed size 67112252.
xz -8. Compressed size 33561724.
zstd --ultra -22. Compressed size 33558039.
We learn from this that gzip and bzip2 can never compress this file. However with a big enough dictionary xz and zstd can compress the file and in that case zstd does the best job.
However, now try:
head -c150M /dev/urandom > test150.bin
cat test150.bin test150.bin > test300.bin
test300.bin is of size exactly 314572800. Let's try the best compression algorithms again at their highest settings.
xz -9. Compressed size 314588440
zstd --ultra -22. Compressed size 314580017
In this case neither tool can compress the file.
Is there a tool that has an arbitrarily large dictionary size so it
can compress a file such as test300.bin?
Thanks to the comment and answer it turns out both zstd and xz can do it. You need zstd version 1.4.x however.
zstd --long=28. Compressed size 157306814
xz -9 --lzma2=dict=150MiB. Compressed size 157317764.
|
It's at least available with the xz command. The xz manpage has:
The following table summarises the features of the presets:
Preset DictSize CompCPU CompMem DecMem
-0 256 KiB 0 3 MiB 1 MiB
[...]
-9 64 MiB 6 674 MiB 65 MiB
Column descriptions:
DictSize is the LZMA2 dictionary size. It is waste of memory to use a
dictionary bigger than the size of the uncompressed file. This is why
it is good to avoid using the presets -7 ... -9 when there's no real
need for them.
[...]
As documented in the Custom compressor filter chains section, you can simply supply manually the dictionary size to xz with for example --lzma2=dict=150MiB (we have insight information telling 150MiB is enough, else in doubt the file size would have to be used).
xz -9 --lzma2=dict=150MiB test300.bin
While doing this the xz process on amd64 stayed most of the time at about 1.6g usage of resident memory.
$ ls -l test*
-rw-r--r--. 1 user user 157286400 Jan 19 16:03 test150.bin
-rw-r--r--. 1 user user 157317764 Jan 19 16:03 test300.bin.xz
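The effect is easy to reproduce at a smaller scale: with a 2 MiB repeat distance, a 1 MiB dictionary misses the repetition while a 4 MiB one catches it (a sketch; sizes are approximate):

```shell
head -c 2M /dev/urandom > t2.bin
cat t2.bin t2.bin > t4.bin              # second half repeats the first
xz -c --lzma2=dict=1MiB t4.bin | wc -c  # ~4 MiB: repeat is out of reach
xz -c --lzma2=dict=4MiB t4.bin | wc -c  # ~2 MiB: repeat found
```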
| Is there a compression tool with an arbitrarily large dictionary? |
1,331,533,062,000 |
I am using bzip2 to compress a file; the process takes more than 100% CPU. Is there any way to run bzip2 with a minimum CPU percentage?
|
Is this process interfering with other processes on your system? Why do you want to limit the CPU bzip2 uses?
You can use the nice command to change a process's priority:
$ nice -n 19 bzip2 <file>
Additionally, you can try lowering the bzip2 compression level:
$ bzip2 -1 <file>
| How to bzip a file with minimum cpu percentage? |
1,331,533,062,000 |
I cannot use tar -tz as the solaris version I'm using does not accept the -z option.
I tried something like gunzip file.tar.gz | tar -tv but that only gives:
tar: /dev/rmt/0: No such file or directory
...and unzips the tar.gz to .tar which is undesired. I only want to look inside them without modification.
|
Don't have a Solaris box handy, but wouldn't something like zcat file.tar.gz | tar -tv do the trick? That might need to be gzcat or gunzip -c on Solaris, I'm not sure.
UPDATE: @Riccardo Murri points out that the default behavior of tar on Solaris is NOT to read from stdin (which seems very un-Unixish, but c'est la vie), so zcat file.tar.gz | tar -tv -f - is probably what you need.
UPDATE 2: Finally found what looks to be a decent site for Solaris man pages, so I present to you man gunzip and man tar.
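With GNU tools the pipeline works the same way, and the explicit -f - keeps it portable to tars that don't default to stdin (a small demo):

```shell
echo hello > f.txt
tar -cf - f.txt | gzip > file.tar.gz
zcat file.tar.gz | tar -tvf -    # lists f.txt with size and permissions
```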
| How can i view the contents of a tar.gz file (filenames + filesize) |
1,331,533,062,000 |
I have a massive number of files with extensions like .0_1234, .0_4213, .0_4132, etc. Some of these are gzip compressed and some are raw email. I need to determine which are compressed files, decompress those, and rename all files to a common extension once all compressed files are decompressed. I've found I can use the file command to determine which are compressed, then grep the results and use sed to whittle the output down to a list of files, but I can't determine how to decompress the seemingly random extensions. Here's what I have so far:
file *|grep gzip| sed -e 's/: .*$//g'
I'd like to use xargs or something to take the list of files provided in output and either rename them to .gz so they can be decompressed, or simply decompress them in-line.
|
Don't use gzip; use zcat instead, which doesn't expect an extension. You can do the whole thing in one go. Just try to zcat the file and, if that fails because it isn't compressed, cat it instead:
for f in *; do
( zcat "$f" || cat "$f" ) > temp &&
mv temp "$f".ext &&
rm "$f"
done
The script above will first try to zcat the file into temp and, if that fails (if the file isn't in gzip format), it will just cat it. This is run in a subshell to capture the output of whichever command runs and redirect it to a temp file (temp). Then, the temp is renamed to the original file name plus an extension (.ext in this example) and the original is deleted.
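A quick run on a mixed pair of files shows the behaviour (hedged demo; .ext and the sample names are made up):

```shell
echo mail1 > 0_1234                # raw email
echo mail2 | gzip > 0_4213         # gzip-compressed, no .gz extension
for f in 0_*; do
    ( zcat "$f" 2>/dev/null || cat "$f" ) > temp &&
    mv temp "$f".ext &&
    rm -f "$f"
done
cat 0_1234.ext 0_4213.ext          # both now plain text
```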
| mass decompress gzip files without gz extension |
1,331,533,062,000 |
I have some process that creates a stream of millions of highly similar lines. I'm piping this to gzip. Does the compression ratio improve over time in such a setup? I.e. is the compression ratio better for 1 million similar lines than, say, 10,000?
|
It does, up to a certain point, and then it evens out. The compression algorithms have a restriction on the size of the blocks they look at (bzip2) and/or on the tables they keep with information on previous patterns (gzip).
In the case of gzip, once a table is full old entries get pushed out, and compression improves no further. Depending on your compression quality factor (-1 to -9) and the repetitiveness of your input, this filling up can of course take a while and you might not notice.
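You can watch the ratio level off with a repetitive stream: once the tables are warm, ten times the input costs slightly less than ten times the output. A rough sketch:

```shell
yes 'very similar log line' | head -n 10000  | gzip | wc -c
yes 'very similar log line' | head -n 100000 | gzip | wc -c
# the second count is a bit under 10x the first: per-byte ratio has evened out
```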
| Does gz compression ratio improve over time? |
1,331,533,062,000 |
I have a folder /home/testuser/log which contains log files one day old (*.log). I wish to compress all the log files older than one day into a single archive (gzip or tar.gz) and delete the older files.
I tried to pipe find into tar but it didn't work.
|
Create a tar.gz of log files older than one day:
find /home/testuser/log/ -type f -mtime +1 -print0 | xargs -0 tar -czvPf /opt/older_log_$(date +%F).tar.gz
Delete the older files (note: the echo below makes this a dry run; if the find output is correct, remove echo and it will delete those files):
find /home/testuser/log/ -type f -mtime +1 | xargs -n1 echo rm
| Compress old log file into single zip-linux |
1,331,533,062,000 |
I have a huge file (420 GB) compressed with gzip and I want to decompress it, but my HDD doesn't have space for storing the whole compressed file and its contents.
Would there be a way of decompressing it 'while deleting it'?
In case it helps, gzip -l says that there is only one file inside (which is a tar file that I will also have to unpack somehow).
Thanks in advance!
|
Would there be a way of decompressing it 'while deleting it'?
This is what you asked for. But it may not be what you really want. Use at your own risk.
If the 420GB file is stored on a filesystem with sparse file and punch hole support (e.g. ext4, xfs, but not ntfs), it would be possible to read a file and free the read blocks using fallocate --punch-hole. However, if the process is cancelled for any reason, there may be no way to recover since all that's left is a half-deleted, half-uncompressed file. Don't attempt it without making another copy of the source file first.
Very rough proof of concept:
# dd if=/dev/urandom bs=1M count=6000 | pigz --fast > urandom.img.gz
6000+0 records in
6000+0 records out
6291456000 bytes (6.3 GB, 5.9 GiB) copied, 52.2806 s, 120 MB/s
# df -h urandom.img.gz
Filesystem Size Used Avail Use% Mounted on
tmpfs 7.9G 6.0G 2.0G 76% /dev/shm
urandom.img.gz file occupies 76% of available space, so it can't be uncompressed directly. Pipe uncompressed result to md5sum so we can verify later:
# gunzip < urandom.img.gz | md5sum
bc5ed6284fd2d2161296363edaea5a6d -
Uncompress while hole punching: (this is very rough without any error checking whatsoever)
total=$(stat --format='%s' urandom.img.gz) # bytes
total=$((1+$total/1024/1024)) # MiB
for ((offset=0; offset < $total; offset++))
do
# read block
dd bs=1M skip=$offset count=1 if=urandom.img.gz 2> /dev/null
# delete (punch-hole) blocks we read
fallocate --punch-hole --offset="$offset"MiB --length=1MiB urandom.img.gz
done | gunzip > urandom.img
Result:
# ls -alh *
-rw-r--r-- 1 root root 5.9G Jan 31 15:14 urandom.img
-rw-r--r-- 1 root root 5.9G Jan 31 15:14 urandom.img.gz
# du -hcs *
5.9G urandom.img
0 urandom.img.gz
5.9G total
# md5sum urandom.img
bc5ed6284fd2d2161296363edaea5a6d urandom.img
The checksum matches, the size of the source file reduced from 6GB to 0 while it was uncompressed in place.
But there are so many things that can go wrong... better don't do it at all or if you really have to, at least use a program that does saner error checking. The loop above does not guarantee at all that the data was read and processed before it gets deleted. If dd or gunzip returns an error for any reason, fallocate still happily tosses it... so if you must use this approach better write a saner read-and-eat program.
| Decompress gzip file in place |
1,331,533,062,000 |
Can the initramfs image be compressed by a method other than gzip, such as lzma?
|
Yes. I use in-kernel initrd and it offers at least the following methods:
None (as it is compressed with kernel)
GZip
BZip
LZMA (possibly zen-only)
You can use it on an external file and with LZMA (at least on Ubuntu).
Wikipedia states that the Linux kernel supports gzip, bzip2 and lzma (depending, of course, on which algorithms are compiled in).
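You can tell which method an image uses from its first bytes. A sketch generating samples for comparison (xz standing in for LZMA here; raw .lzma streams use a different header), which you can apply to /boot/initrd.img-$(uname -r) the same way:

```shell
echo data | gzip  > s.gz    # 1f 8b
echo data | bzip2 > s.bz2   # 42 5a 68 ("BZh")
echo data | xz    > s.xz    # fd 37 7a
for f in s.gz s.bz2 s.xz; do printf '%s: ' "$f"; od -An -N3 -t x1 "$f"; done
```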
| Can the initramfs image use a compression format other than gzip? |
1,331,533,062,000 |
Is there a way I can diff a gzipped tarball against an existing directory?
I would like to be able to do it without extracting the data from the tarball.
|
Mount the tarball as a directory, for example with AVFS. Then use diff -r on the real directory and the point where the tarball is mounted.
mountavfs
diff -r ~/.avfs/path/to/foo.tar.gz\# real-directory
On GNU/Hurd, the equivalent would be tarfs.
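If mounting isn't an option, GNU tar's --compare (-d) mode is a partial alternative: it reports members that differ from the filesystem, though it won't notice files that exist only in the directory. A sketch:

```shell
mkdir -p real-directory && echo a > real-directory/f
tar -czf foo.tar.gz -C real-directory f
tar --compare --file=foo.tar.gz -C real-directory && echo 'no differences'
echo b > real-directory/f
tar --compare --file=foo.tar.gz -C real-directory   # now reports the changed file
```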
| diff a gzipped tarball against a directory? |
1,331,533,062,000 |
I have a bunch of .jpg.gz files in a directory that I need to decompress.
I know that the decompress command is:
tar -xzvf FileNameHere.jpg.gz
But is there a flag so that you can recursively uncompress the files in a directory? I have over a hundred compressed files and I don't want to manually decompress every single one.
Also, since I am SSHing into a hosting service, I only have the following commands to use:
arch
bzip2
cal
cksum
cmp
cp
crontab
basename
cd
chmod
ls
date
df
du
dos2unix
unix2dos
file
getfacl
gzip
head
hostid
tail
mkdir
mv
nslookup
sdiff
tar
uptime
wget
whois
unzip
|
To extract your files, you need to use gzip:
gzip -d *.jpg.gz
You mention doing this recursively; given that you don't have find, you'll have to visit each directory in turn and run the above command.
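Although find isn't in the allowed list, the shell itself can recurse: bash's globstar option (bash ≥ 4) expands ** across subdirectories. A sketch on throwaway files:

```shell
mkdir -p photos/2011
echo img | gzip > photos/2011/x.jpg.gz
bash -c 'shopt -s globstar; gzip -d ./**/*.jpg.gz'   # ** recurses into subdirectories
ls photos/2011    # -> x.jpg
```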
| How to recursively uncompress gz files on a remote host with limited commands? |
1,331,533,062,000 |
my command:
gunzip -c serial2udp.image.gz |
sudo dd of=/dev/mmcblk0 conv=fsync,notrunc status=progress bs=4M
my output:
15930949632 bytes (16 GB, 15 GiB) copied, 1049 s, 15.2 MB/s
0+331128 records in
0+331128 records out
15931539456 bytes (16 GB, 15 GiB) copied, 1995.2 s, 8.0 MB/s
the card: SanDisk Ultra 32GB MicroSDHC Class 10 UHS Memory Card Speed Up To 30MB/s
distribution: 16.04 xenial with Xfce
kernel version: 4.13.0-37-generic
I understand taking 17 minutes seems reasonable from what I've read. Playing with block size doesn't really seem to make much of a difference (bs=100M still exhibits this behaviour with similar timings). Why do the updates hang, and why doesn't it produce a finished report for another 16 minutes?
iotop tells me that mmcqd/0 is still running in the background at this point (at 99% IO), so I figure there is a cache somewhere that is holding up the final 5MB, but I thought fsync should make sure that doesn't happen.
iotop shows no traffic crossing for dd at this time either. Ctrl-C is all but useless, and I don't want to corrupt my drive after writing to it.
|
I figure there is a cache somewhere that is holding up the final 5MB but I thought fsync should make sure that doesn't happen
conv=fsync means to write back any caches by calling fsync - after dd has written all the data. Hanging at the end is exactly what it will do.
When the output file is slower than the input file, the data written by dd can pile up in caches. The kernel cache can sometimes fill a significant fraction of system RAM. This makes for very misleading progress information. Your "final 5MB" was just an artefact of how dd shows progress.
If your system was indeed caching about 8GB (i.e. half of the 16GB of written data), then I think you either must have about 32GB of RAM, or have been fiddling with certain kernel options. See the lwn.net link below. I agree that not getting any progress information for 15 minutes is pretty frustrating.
There are alternative dd commands you could use. If you want dd to show more accurate progress, you might have to accept more complexity. I expect the following would work without degrading your performance, though maybe reality has other ideas than I do.
gunzip -c serial2udp.image.gz |
dd iflag=fullblock bs=4M |
sudo dd iflag=fullblock oflag=direct conv=fsync status=progress bs=4M of=/dev/mmcblk0
oflag=direct iflag=fullblock avoids piling up kernel cache, because it bypasses it altogether.
iflag=fullblock is required in such a command AFAIK (e.g. because you are reading from a pipe and writing using direct IO). The effect of missing fullblock is another unfortunate complexity of dd. Some posts on this site use this to argue you should always prefer to use a different command. It's hard to find another way to do direct or sync IO though.
conv=fsync should still be used, to write back the device cache.
I added an extra dd after gunzip, to buffer the decompressed output in parallel with the disk write. This is one of the issues that makes the performance with oflag=direct or oflag=sync a bit complex. Normal IO (non-direct, non-sync) is not supposed to need this, as it is already buffered by the kernel cache. You also might not need the extra buffer if you were writing to a hard drive with 4M of writeback cache, but I don't assume an SD card has that much.
You could alternatively use oflag=direct,sync (and not need conv=fsync). This might be useful for good progress information if you had a weird output device with hundreds of megabytes of cache. But normally I think of oflag=sync as a potential barrier to performance.
There is a 2013 article https://lwn.net/Articles/572911/ which mentions minute-long delays like yours. Many people see this ability to cache minutes worth of writeback data as undesirable. The problem was that the limit on the cache size was applied indiscriminately, to both fast and slow devices. Note that it is non-trivial for the kernel to measure device speed, because it varies depending on the data locations. E.g. if the cached writes are scattered in random locations, a hard drive will take longer from repeatedly moving the write head.
why do the updates hang
The fsync() is a single system call that applies to the whole file or device at once. It does not return any status updates before it is done.
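If you want to watch that cache draining while dd appears to hang, Linux exposes the amount of dirty (not-yet-written-back) data:

```shell
# refresh every second; both numbers fall back towards zero as fsync completes
watch -n 1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
```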
| Why does a gunzip to dd pipeline slow down at the end? |
1,331,533,062,000 |
I have some files of which some are very large (like several GB), which I need to concatenate to one big file and then zip it, so something like this:
cat file1 file2 file3 file4 | gzip > compress.gz
which produces extremely high CPU and memory load on the machine or even makes it crash, because the cat generates several GB.
I can't use tar archives, I really need one big chunk compressed by gzip.
How can I produce the same gz file in a sequential way, so that I don't have to cat several GB first, but still have all files in the same .gz in the end?
|
cat doesn't use any significant CPU time (unless maybe on-disk decryption or decompression is involved and accounted to the cat process which is the one reading from disk) or memory. It just reads the content of the files and writes it to the pipe in small chunks in a loop.
However, here, you don't need it. You can just do:
gzip -c file1 file2 file3 file4 > compress.gz
(not that it will make a significant difference).
You can lower the priority of the gzip process (wrt CPU scheduling) with the nice command. Some systems have an ionice command for the same with I/O.
nice -n 19 ionice -c idle pigz -c file1 file2 file3 file4 > compress.gz
On Linux would run a parallel version of gzip with as little impact on the system as possible.
Having compress.gz on a different disk (if using rotational storage) would make it more efficient.
The system may cache the data that cat or gzip/pigz reads in memory, if it has memory available to do so. It does that in case you need that data again. In the process, it may evict other cached data that is more useful. Here, that data likely doesn't need to be available.
With GNU dd, you can use iflag=nocache to advise the system not to cache the data:
for file in file1 file2 file3 file4; do
ionice -c idle dd bs=128k status=none iflag=nocache < "$file"
done | nice pigz > compress.gz
| Less resource hungry alternative for piping `cat` into gzip for huge files |
1,331,533,062,000 |
I'm running zgrep on a computer with 16 CPUs, but it only takes one CPU to run the task.
Can I speed it up, perhaps utilize all 16 cores?
P.S. The I/O is just fine; I could just copy the gzipped file onto a memory disk.
|
You can do as @UlrichDangel suggested in the comments and replace the executable gzip with pigz. If you want something a little less invasive you can also create functions for gzip and gunzip and add them to your $HOME/.bashrc file.
gzip() {
    pigz "$@"
}
export -f gzip

gunzip() {
    unpigz "$@"
}
export -f gunzip
Now when you run zgrep or zcat it will use pigz instead.
References
Replace bzip2 and gzip with pbzip2 and pigz system wide?
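If your actual workload involves many .gz files rather than one big file, you can also parallelize across files with plain xargs, without swapping out gzip at all (the pattern and file names here are placeholders):

```shell
# run up to 16 zgrep processes at once, one file each
printf '%s\n' *.gz | xargs -P 16 -n 1 zgrep -H 'pattern'
```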
| Speed up zgrep on a multi-core computer |
1,331,533,062,000 |
To get the uncompressed size of an already compressed file, I can use the -l option of the gzip utility:
gzip -l compressedfile.gz
However is there a way to get size of compressed file if I am piping the output? For example using this command:
gzip -fc testfile > testfile.gz
or specifically if I am redirecting output somewhere where I might not have direct access (server)
gzip -fc testfile > /dev/null
gzip -fc testfile | ssh serverip "cat > file.gz"
Can this be done? I need either the compression ratio or the compressed size.
|
dd to the rescue.
gzip -fc testfile | dd of=testfile.gz
0+1 records in
0+1 records out
42 bytes (42 B) copied, 0.00018711 s, 224 kB/s
or in your ssh example
gzip -fc testfile | dd | ssh serverip "cat > file.gz"
0+1 records in
0+1 records out
42 bytes (42 B) copied, 0.00018711 s, 224 kB/s
And then just parse the output of the commands using awk or somesuch to pluck out the crucial part of the last line.
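For instance (the field positions here assume GNU dd's summary line):

```shell
# capture dd's stderr, then extract the compressed byte count
gzip -fc testfile | dd of=testfile.gz 2> dd.log
awk '/bytes/ {print $1; exit}' dd.log
```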
| Get compressed size of piped output with gzip? |
1,331,533,062,000 |
I've backed up a partition using sudo dd bs=8M if=/dev/sda2 | gzip > /someFolderOnSDB/sda2.img.gz.
The image is stored on a separate disk sdb.
When restoring it using gunzip -k /mnt/bkp/sda2.img.gz | sudo dd of=/dev/sda2, I noticed that the image is being unzipped into the folder someFolderOnSDB where the gz file is, and I think is simultaneously being written with dd to sda2.
I don't want that. I want the unzipping to happen in memory rather than on sdb, with the decompressed portions written directly to sda2 by dd.
The unzipped image is 300GB in size. I considered using tee and/or the redirect operator >, but am unsure how.
|
You can do this by instructing gunzip to write the decompressed data to its standard output:
gunzip -c /mnt/bkp/sda2.img.gz | sudo dd of=/dev/sda2
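zcat is equivalent to gunzip -c, and with GNU dd you can add status=progress to get feedback during the long write:

```shell
zcat /mnt/bkp/sda2.img.gz | sudo dd of=/dev/sda2 bs=4M status=progress
```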
| How to uncompress a gzipped partition image and dd it directly to the destination partition without writing to current partition? |
1,331,533,062,000 |
I'm working on big data and I need to archive a directory that is larger than 64 terabytes. I cannot create such a large file (archive) on my file system. Unfortunately, all proposed solutions for creating a multi-part archive on Linux suggest creating an archive first and then splitting it into smaller files with the split command.
I know that it is possible with e.g. 7zip, but unfortunately I'm forced to use the tools built into RedHat 6 - tar, gzip, bzip2...
I was wondering about creating a script that would ask the user for the maximum volume size. It would compress every single file with gzip, split those files that are too big, and then manually merge them into many tars of the chosen size. Is that a good idea?
Is there any other possibility to achieve big archive division with basic Linux commands?
UPDATE:
I've tested the solution on the filesystem with the restricted maximum file size and it worked. The pipe that redirects the tar output directly into split command has worked as intended:
tar -czf - HugeDirectory | split --bytes=100GB - MyArchive.tgz.
The created files are already small and when merging them together no supersized files are created:
cat MyArchive.tgz* | tar -xzf -
|
If you have enough space to store the compressed archive, then the archive could be created and split in one go (assuming GNU split):
tar -c -vz -f - directory | split --additional-suffix=.gz.part -b 1G
This would create files called xaa.gz.part, xab.gz.part etc., each file being a 1G compressed bit of the tar archive.
To extract the archive:
cat x*.gz.part | tar -x -vz -f -
If the filesystem can not store the compressed archive, the archive parts needs to be written to another filesystem, alternative to some remote location.
On that remote location, for example:
ssh user@serverwithfiles tar -c -vz -f - directory | split --additional-suffix=.gz.part -b 1G
This would transfer the compressed archive over ssh from the machine with the big directory to the local machine and split it.
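A quick round trip (with deliberately small part sizes) is an easy way to convince yourself that the parts reassemble correctly:

```shell
tar -c -z -f - directory | split -b 1M - part.
mkdir -p restored
cat part.* | tar -x -z -f - -C restored
diff -r directory restored/directory && echo "archive parts verified"
```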
| Archive big data into multiple parts |
1,331,533,062,000 |
For a while now, I have had the problem that a gzip process randomly starts on my Kubuntu system, uses up quite a bit of resources, and causes my notebook fan to go crazy. The process shows up as gzip -c --rsyncable --best in htop and runs for quite a long time. I have no clue what is causing this; the system is a Kubuntu 14.04 and has no backup plan set up or anything like that. Any idea how I can figure out what is causing this the next time the process appears? I have done a bit of googling already but could not figure it out. I saw some suggestions with the ps command, but grepping all the lines there did not really point to anything.
|
Process tree
While the process is running try to use ps with the f option to see the process hierarchy:
ps axuf
Then you should get a tree of processes, meaning you should see what the parent process of the gzip is.
If gzip is a direct descendant of init then probably its parent has exited already, as it's very unlikely that init would create the gzip process.
Crontabs
Additionally you should check your crontabs to see whether there's anything creating it. Do sudo crontab -l -u <user> where user is the user of the gzip process you're seeing (in your case that seems to be root).
If you have any other users on that system which might have done stuff like setting up background services, then check their crontabs too. The fact that gzip runs as root doesn't guarantee that the original process that triggered the gzip was running as root as well. You can see a list of all existing crontabs by doing sudo ls /var/spool/cron/crontabs.
Logs
Check all the systems logs you have, looking for suspicious entries at the time the process is created. I'm not sure whether Kubuntu names its log files differently, but in standard Ubuntu you should at least check /var/log/syslog.
Last choice: a gzip wrapper
If none of these lead to any result you could rename your gzip binary and put a little wrapper in place which launches gzip with the passed parameters but also captures the system's state at that moment.
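A minimal sketch of such a wrapper — all paths here are assumptions, and replacing a system binary should be done with care (and undone afterwards):

```shell
# 1) keep the real binary around:  mv /bin/gzip /bin/gzip.real
# 2) put a script like this in its place (shown written to /tmp for safety):
cat > /tmp/gzip-wrapper <<'EOF'
#!/bin/sh
# log when gzip was called, with which arguments, and the process tree
{ date; echo "args: $*"; ps axuf; } >> /tmp/gzip-calls.log 2>&1
exec /bin/gzip.real "$@"
EOF
chmod +x /tmp/gzip-wrapper
```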
| A gzip process regularly runs on my system, how do I figure out what is triggering it? |
1,331,533,062,000 |
I had a use case where I needed to pack a bunch of files into one. And all the above commands do the same. I know gzip compresses my files, but let's say space is not an issue at all for me; in that case, which one should I choose?
Now some would say you would save some time transferring files over the network by using compression, but the time spent compressing and decompressing cancels out what I would have saved in transfer. So basically I am unable to decide which of the above tools to choose, and when.
|
I had a use case where I needed to pack a bunch of files into one
Ah, you need an archive of files
And all the above commands do the same.
Not at all! Some are archivers, some are compressors, some are decompressors, some a combination.
ar: very archaic, use cases are very specific. Pretty certain you don't ever want to use ar yourself.
gzip / gunzip: Not an archiver. Can take a single stream of data and compress it (or decompress it, in case of gunzip). You can use this together with an archiver. Gzip is very old and slow and inefficient, there's alternatives that achieve much higher compression or higher speed, or any mixture of that (e.g., zstd, lz4)
tar: Short for tape archiver; a very common archiver that you can also tell to compress stuff. For example:
tar cf archive.tar file1 file2 file3
creates an uncompressed archive containing file1, file2 and file3. However, adding the z option to the create command (I know, tar's syntax is hellish):
tar czf archive.tar.gz file1 file2 file3
will make tar use gzip internally and create a tar archive that's been compressed.
You can also just pipe the result through any compressor of your choice to get compressed archives, e.g.
tar cf - file1 file2 file3 | gzip > archive.tar.gz # or
tar cf - file1 file2 file3 | zstd > archive.tar.zst # or
tar cf - file1 file2 file3 | lz4 - archive.tar.lz4 # or
tar cf - file1 file2 file3 | xz > archive.tar.xz
You get the idea.
As common as tar is, it's a very old program and format(s), and it leaves a lot to be desired. But it does correctly deal with Linux file ownership, permissions, links, special files…
zip is a compressing archiver. Works very nicely with windows, as well, but can not deal with file permissions. Hence, not usable for backups!
7z is like zip, a compressing archiver, which cannot deal with user and permission information. Hence, not usable for backups!
mksquashfs is a kind of an archiver, meant for very neatly packed archives, that can also be used like normal file systems. It can use modern, on request very fast or very strong compression.
Now some would say you would save some time while transferring files on network using compression but unzipping and decompression compensates for the time that I would have saved in transfer.
And those people would be right! If you use a modern, speed-optimized compression, you'd be faster than reading or writing from an SSD with decompression. And much, much faster than your network would ever be (unless you are looking at datacenter-grade networking).
So, if speed is your concern, use something that makes use of a fast compressor. As said, gzip is probably not the compressor of choice in 2023, so
tar cf - srces/ | zstd -1 > archive.tar.zst
achieves an archival rate of roughly 3 Gb/s (in case you planned to put this through network, and thought the compressor would be a bottleneck) in my test that uses a mixture of source code, binary files. It makes 1.4 GB out of the original 4.97 GB. Using -2 instead of -1 makes the result another 10% smaller, and reduces the speed to 2.5 Gb/s. Which is still faster than most SATA SSDs could write. And this was single-threaded. Use zstd -2 -T0 to make use of all CPU cores, and my humble PC does 6.5 Gb/s; zstd -4 -T0 still does 2.5 Gb/s, so more than most of my network cards could do, and gets the size down to 1.2 GB :)
So:
Need to archive files, but fast, for sending them to other people who might not have the same software as you? tar cvf - files… | zstd -4 -T0 > archive.tar.zst is what you want
Need to archive files, but strongly compressed, for sending them to other people who might not have the same software as you? tar cvf - files… | zstd -13 -T0 > archive.tar.zst is slower, but gives very high compression ratios already.
Need to archive files, want to read them later on, without having to un-archive things? mksquashfs files… archive.squashfs -comp zstd; add -Xcompression-level 4 to the end for higher speed at the expense of size.
The resulting archive.tar.zst files can be directly unarchived with modern GNU tar, tar xf archive.tar.zst; the archive.squashfs can either be mounted directly with udisksctl loop-setup -f archive.squashfs and used like a DVD (i.e., you can directly browse the files on it), or de-archived using unsquashfs archive.squashfs
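As a sanity check of the archive-then-compress pipelines above, a round trip is easy to verify (gzip used here only because it is installed everywhere):

```shell
mkdir -p demo && echo "hello" > demo/file1
tar cf - demo | gzip > demo.tar.gz
mkdir -p restore && gunzip -c demo.tar.gz | tar xf - -C restore
diff -r demo restore/demo && echo "round trip OK"
```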
| Difference between ar, tar, gzip, zip and when should I decide to choose which one? |
1,331,533,062,000 |
I am trying to extract a gcc-4.9.0.tar.gz downloaded from one of the gcc mirror sites.
In order to check the md5 signature on it before I gunzip it, I did
digest -a md5 -v gcc-4.9.0.tar.gz
which gave
md5 (gcc-4.9.0.tar.gz) = fe8786641134178ecfeee2dc7644a0d8
This matches with the md5.sum in the directory downloaded from the source.
Then I did
gzip -dc gcc-4.9.0.tar.gz | tar xvf -
The extraction began but soon terminated with a
tar: directory checksum error
I also tried to gunzip and untar separately like this
gunzip gcc-4.9.0.tar.gz
Then
tar -xvf gcc-4.9.0.tar
but it also ended with a checksum error.
How do I resolve this?
|
You need to use gtar; it is usually installed via the SUNWgtar package:
gzip -dc gcc-4.9.0.tar.gz | /usr/sfw/bin/gtar -xf -
echo $?
0
Native Solaris unpatched tar may have problems with files created with GNU tar. See answer of @schily why.
| How to correctly extract a .tar.gz with md5.sum on solaris 10 |
1,331,533,062,000 |
Is there a quick and dirty way of estimating gzip-compressibility of a file without having to fully compress it with gzip?
I could, in bash, do
bc <<<"scale=2;$(gzip -c file | wc -c)/$(wc -c <file)"
This gives me the compression factor without having to write the gz file to disk; this way I can avoid replacing a file on disk with its gz version if the resultant disk space savings do not justify the trouble. But with this approach the file is indeed fully put through gzip; it's just that the output is piped to wc rather than written to disk.
Is there a way to get a rough compressibility estimate for a file without having gzip work on all its contents?
|
You could try compressing one in every 10 blocks, for instance, to get an idea:
perl -MIPC::Open2 -nE 'BEGIN{$/=\4096;open2(\*I,\*O,"gzip|wc -c")}
if ($. % 10 == 1) {print O $_; $l+=length}
END{close O; $c = <I>; say $c/$l}'
(here with 4K blocks).
| Estimate compressibility of file |
1,331,533,062,000 |
Let's say that there is one huge db.sql.gz of size 100GB available https://example.com/db/backups/db.sql.gz and the server supports range requests.
So instead of downloading the entire file, I downloaded y bytes(let's say 1024bytes) with an offset of x bytes(let's say 1000bytes) like the following.
curl -r 1000-2024 https://example.com/db/backups/db.sql.gz
With the above command I was able to download the partial content of the gzipped file, now my question is how can I read that partial content?
I tried gunzip -c db.sql.gz | dd ibs=1024 skip=0 count=1 > o.sql but this gives an error
gzip: dbrange.sql.gz: not in gzip format
The error is acceptable since I guess at the top of the file may be there are header blocks which describes encoding.
I noticed that if I'm downloading the file without an offset, I'm able to read the file using gunzip and piping.
curl -r 0-2024 https://example.com/db/backups/db.sql.gz
|
gzip doesn’t produce block-compressed files (see the RFC for the gory details), so it’s not suitable for random access on its own. You can start reading from a stream and stop whenever you want, which is why your curl -r 0-2024 example works, but you can’t pick up a stream in the middle, unless you have a complementary file to provide the missing data (such as the index files created by gztool).
To achieve what you’re trying to do, you need to use block compression of some sort; e.g. bgzip (which produces files which can be decompressed by plain gzip) or bzip2, and do some work on the receiving end to determine where the block boundaries lie. Peter Cock has written a few interesting posts on the subject: BGZF - Blocked, Bigger & Better GZIP!, Random access to BZIP2?
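You can observe the prefix-only property directly: a prefix of a gzip stream decompresses partially (ending in a truncation error), while a slice taken from the middle is rejected outright:

```shell
# a prefix decodes until it runs out of data…
head -c 2024 db.sql.gz | gunzip 2>/dev/null | head
# …but a middle slice fails immediately ("not in gzip format")
tail -c +1001 db.sql.gz | head -c 1024 | gunzip > /dev/null
```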
| Reading partially downloaded gzip with an offset |
1,331,533,062,000 |
I'm trying to write a bash script that will go into a directory, loop through the .gz files, and delete them if they are empty (i.e. the uncompressed file contained within is empty).
I've got a couple of questions:
Is there a standard file size of a compressed (gz) empty file I can check for?
Or is there a better way to check if a gz contains an empty file
without decompressing it with a bash script?
I was trying to use the following code to achieve this, but it relies on the file size being 0, I think.
for f in dir/*
do
    if [[ -s $f ]]
    then
        do_file_creation
    fi
done
|
Unfortunately the gzip format stores the original filename, so the compressed size will vary between different empty files.
gunzip -c "$f" | head -c1 | wc -c
will echo 1 for files that are non-zero in uncompressed size, and 0 for compressed empty files.
for f in dir/*
do
    if [[ $(gunzip -c "$f" | head -c1 | wc -c) == "0" ]]
    then
        do_file_creation
    fi
done
Might do what you want?
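Alternatively, gzip -l reads the uncompressed size stored in the gzip trailer without decompressing the data — reliable as long as the original was under 4 GiB, since the stored field is only 32 bits:

```shell
# prints the uncompressed size recorded in the gzip trailer (0 for empty)
gzip -l file.gz | awk 'NR==2 {print $2}'
```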
| Unix bash script check empty gz files [duplicate] |
1,331,533,062,000 |
I have a large, repetitive text file. It compresses very well - it's about 3MB compressed. However, if decompressed, it takes 1.7GB. Since it's repetitive, I only need a fraction of the output to check the contents of the file.
It was compressed using gzip. Does gunzip provide any way to decompress only the first few megs of a file?
|
You could decompress to standard output and feed it through something like head to only capture a bit of it:
gunzip -c file.gz | head -c 20M >file.part
The -c flag with a size suffix such as 20M requires the head implementation that is provided by GNU coreutils.
dd may also be used:
gunzip -c file.gz | dd of=file.part bs=1M count=20
Both of these pipelines will copy the first 20 MiB of the uncompressed file to file.part.
| How to decompress only a portion of a file? |
1,331,533,062,000 |
I was able to archive and compress a folder with the following command:
tar -cvfz example2.tgz example1
I then removed the example1 folder and tried to unpack the archive using this command:
tar -xvfz example2.tgz
and tried
tar -zxvf example2.tgz
Niether of these commands worked. The error returned was:
gzip: example2.tgz: not in gzip format
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors
It clearly used gzip compression since I passed tar the z qualifier in the initial command. What am I doing wrong? I am on Ubuntu 14.04
|
The command you're showing in your first line (tar -cvfz example2.tgz example1) doesn't work and it should not output any file example2.tgz. Didn't you get an error? Perhaps the file example2.tgz existed already? Check if you have a file called z in that folder - that's where the tgz has been saved to, because:
The -f parameter specifies the archive file, and that file name must follow immediately afterwards: -f <file>
Try
tar cvzf exam.tgz example1
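You can see the misparse directly: with GNU tar, the f in the -cvfz cluster consumes the z, so the archive is written to a file literally named z (try this in a scratch directory):

```shell
mkdir -p example1 && touch example1/file
tar -cvfz example2.tgz example1   # tar may complain that example2.tgz is missing, but…
ls -l z                           # …the archive ended up here
```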
| can't decompress .tgz using gunzip |
1,331,533,062,000 |
I have a bunch of gz files, and their unzipped versions contain the patterns A and B=1 (these are on different lines, with A appearing first).
I want to write a command that gives me the content of the lines from a line containing A through a line containing B=1 - or at least the content between A and B=1, inclusive.
Input file1 :
..A ...
...
...B=0..
...
Input file2 :
..A ...
...
...B=1..
...
My command must output A ....B=1 for file2 and nothing for file1.
I did something like this, but it is not working as expected:
find . -name \*.gz -print0 | xargs -0 zcat | sed -n -e '/A/,/B=1/p'
What is the problem here?
|
Let's ignore the compression for now. You want to output the lines between A and B=1, but only if both appear. The sed you used will not do that, since it starts outputting as soon as A is seen, and doesn't check for B=1. We could use the hold buffer in sed to keep everything until B=1 is found, but I'm more comfortable with awk, so here:
$ echo -en 'not this\nA\nthis\nB=1\nnot this\n' |
awk '/A/ {save=1} save {data = data $0 ORS} /B=0/ {save=0; data=""} /B=1/ {save=0; printf "%s", data; data=""} '
A
this
B=1
The B=0 rule handles blocks that should not be printed.
Then, handling the compression and multiple files.
The find+xargs you did works, though if some files can have partial blocks (A without B), concatenating the files together will lead to problems. Assuming that's not the case, we can just stick the awk to the end:
$ find . -name foo\*.gz -print0 | xargs -0 zcat | \
awk '/A/ {s=1} s {d = d $0 ORS} /B=0/ {s=0; d=""}
/B=1/ {s=0; printf "%s", d; d=""} '
If we do need to deal with partial blocks, we'll have to handle each file separately:
$ find . -name foo\*.gz -print0 | xargs -0 sh -c '
for f; do zcat "$f" | awk '\''/A/ {s=1} s {d = d $0 ORS}
/B=0/ {s=0; d=""} /B=1/ {s=0; printf "%s", d; d=""} '\''; done' sh
The quoting is horrible, so the awk script should probably to a file of its own.
Or just do it in the shell (Bash/ksh/zsh):
$ shopt -s globstar # set -o globstar in ksh
$ for f in **/*.gz ; do zcat "$f" |
awk '/A/ {s=1} s {d = d $0 ORS} /B=0/ {s=0; d=""}
/B=1/ {s=0; printf "%s", d; d=""} ' ; done
If you want to print only the intervening lines (not the A and B=1 lines), exchange the positions of the /A/ {...} and /B=.../ {...} blocks.
| Sed for gzip files |
1,331,533,062,000 |
File req contains the request header:
GET /cd/E11882_01/server.112/e41084/toc.htm HTTP/1.1^M
Host: docs.oracle.com^M
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8^M
Accept-Language: en-US,en;q=0.5^M
Accept-Encoding: gzip, deflate^M
Connection: keep-alive^M
^M
I run:
cat req | nc docs.oracle.com 80 > resp
resp contains:
HTTP/1.1 200 OK^M
Server: Apache^M
ETag: "726bf43b293f9fc8eac0f8f6b7be3a84:1457459134"^M
Last-Modified: Fri, 04 Mar 2016 14:26:34 GMT^M
Accept-Ranges: bytes^M
Content-Type: text/html^M
Vary: Accept-Encoding^M
Content-Encoding: gzip^M
Date: Sat, 18 Jun 2016 07:04:06 GMT^M
Content-Length: 13163^M
Connection: keep-alive^M
^M
^_<8b>^H^@^@^@^@^@^@^@Å}ysã8<92>ïÿó)¸Þ<88>}3ïµËâMÎvy<83>â%ªuµ(Õ1^[^[
Z¢mvÉ<92>[Gu¹?ýf<82>^D^Hɦ Òîx^[³]¶¬ü^AH$^R<99><89>Dâç^?óÆîìëÄ<97>î^O^Oëë¿ý<8c>ÿHëds÷ñ"Ý\à^Gi²<82>^?^^ÒC^Bß9<^¦¿^_³ï^_/¾\Î<9d>Kwûð<98>^\²<9b>uz!-·<9b>Cº9|¼<88>ü<8f>éê.½ T<9b>ä!ýxñ=KÿxÜî^NÜ^WÿÈV<87>û<8f>«ô{¶L/É/?IÙ&;dÉúr¿LÖéGùCç'é!ù<91>=^\^_èG^Lwy<9f>ìö)à^\^O·<97>^V~|È^NëôÚK^NÉM²O¥ø×<81>4<80>¡^\<93>»T<9a>¦·é.Ý,SéRró^^ì^?¾Ê)N:z<97>nÒ]rØî¸<9e><8e>wÉr<9d>J<9e>3íJ_z³á^@!¾§»Cº<93>þ>Ü®R飴Ú.<8f>^Oðí^?@^CÃtw<97>®¤Oén<9f>m7<92>Ü1õ^Kéê´<9d>Õ^R¨^_ö^_<96>»49¤+®5¥#^[<97>^]ù²£Ïô^?jÆ?^Uë_ϨwÛ<9b>íaÏ^Q%ëue^Sd<94>Üwk8T<89><93><80><»ÍR<9e>7¾&w,í²£U<93>í^KF<8c>o9:h{^Zä4ëlóMÚ¥køð<90> <88>ÜïÒÛ<8f>^W^_>\Áÿ²Í*ýñ^AäòB"ãøxÑÛ>@^_^OO<8f>ðó!ýq¸B¡=Gr·<8f>O»ìîþ^LmµÜ><l7<84>äj _9Aæ<88>^<82>ÿÛÏûå.{<^T^?L^^^_×Ù^Rä^_ð~K¾'ù^_/$i¿[<9e>·÷Ûþ
...continues...
Now, apparently the response body is in gzip format. To decompress it, I have copied the response body to resp-body. So, resp-body contains:
^_<8b>^H^@^@^@^@^@^@^@Å}ysã8<92>ïÿó)¸Þ<88>}3ïµËâMÎvy<83>â%ªuµ(Õ1^[^[
Z¢mvÉ<92>[Gu¹?ýf<82>^D^Hɦ Òîx^[³]¶¬ü^AH$^R<99><89>Dâç^?óÆîìëÄ<97>î^O^Oëë¿ý<8c>ÿHëds÷ñ"Ý\à^Gi²<82>^?^^ÒC^Bß9<^¦¿^_³ï^_/¾\Î<9d>Kwûð<98>^\²<9b>uz!-·<9b>Cº9|¼<88>ü<8f>éê.½ T<9b>ä!ýxñ=KÿxÜî^NÜ^WÿÈV<87>û<8f>«ô{¶L/É/?IÙ&;dÉúr¿LÖéGùCç'é!ù<91>=^\^_èG^Lwy<9f>ìö)à^\^O·<97>^V~|È^NëôÚK^NÉM²O¥ø×<81>4<80>¡^\<93>»T<9a>¦·é.Ý,SéRró^^ì^?¾Ê)N:z<97>nÒ]rØî¸<9e><8e>wÉr<9d>J<9e>3íJ_z³á^@!¾§»Cº<93>þ>Ü®R飴Ú.<8f>^Oðí^?@^CÃtw<97>®¤Oén<9f>m7<92>Ü1õ^Kéê´<9d>Õ^R¨^_ö^_<96>»49¤+®5¥#^[<97>^]ù²£Ïô^?jÆ?^Uë_ϨwÛ<9b>íaÏ^Q%ëue^Sd<94>Üwk8T<89><93><80><»ÍR<9e>7¾&w,í²£U<93>í^KF<8c>o9:h{^Zä4ëlóMÚ¥køð<90> <88>ÜïÒÛ<8f>^W^_>\Áÿ²Í*ýñ^AäòB"ãøxÑÛ>@^_^OO<8f>ðó!ýq¸B¡=Gr·<8f>O»ìîþ^LmµÜ><l7<84>äj _9Aæ<88>^<82>ÿÛÏûå.{<^T^?L^^^_×Ù^Rä^_ð~K¾'ù^_/$i¿[<9e>·÷Ûþ
...continues...
Then I tried gzip -d resp-body, but it did not work.
What should I do in order to decompress the response?
|
Delete the headers and what you'll have left is gzip-compressed data that can be decompressed with gzip -d or zcat.
e.g.
sed -e '1,/^[[:space:]]*$/d' resp | gzip -d > resp.decompressed
The sed script deletes the headers - i.e. everything from the first line to the first empty line (/^[[:space:]]*$/).
The [[:space:]] character-class will make the sed script match empty lines and lines containing only space characters (including carriage-returns, ^M)
BTW, a slightly smarter version of this would extract the Content-Encoding: and Content-Type: headers, and use the mime-type from that to decide whether to use cat, lynx -dump, gzip -d, bzip2 -d, xz -d or whatever else to "decode" the data. But that would probably require writing it in perl.
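A rough shell sketch of that idea, dispatching on the Content-Encoding header (the header and file names match the response shown above; a real implementation would also inspect Content-Type):

```shell
# read the encoding, strip the headers, then decode accordingly
enc=$(sed -n 's/^Content-Encoding:[[:space:]]*//p' resp | tr -d '\r')
sed -e '1,/^[[:space:]]*$/d' resp > resp-body
case "$enc" in
  gzip) gzip -d < resp-body > resp.decompressed ;;
  *)    cp resp-body resp.decompressed ;;
esac
```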
| How to decompress gzipped HTTP response? |
1,331,533,062,000 |
Scenario: a single 1g CSV.gz is being written to an FTP folder. At the same time, my client machine connects over sFTP to that folder and attempts to pull it down.
Q: After fetching that file, resulting in whatever apparent length I get client-side, can gzip -t detect and fail the partial file, regardless of where the truncation hit?
I figure that unzipping or -t'ing will error out on 99% of the possible truncation points when a fragment ends abruptly, but does the gz structure have clean cleaving points where gzip will accidentally report success?
mitigations that are not on table (because if one of these were in play, I wouldn't need to ask the above.)
Getting the file length or md5 through another network request.
polling the file length through FTP isn't great, as the server may be sporadically writing chunks to the gzip stream. Until the batch job has closed the file handle, it would be lethal to my analysis to mistake that for a complete data set.
being given the final file length or hash by the batch job removes the need for this Q, but that puts an implementation burden on a team that (for this Q's purposes), may as well not exist.
we can't avoid the racing by scheduling reads/writes for different times of day.
the server is not using atomic move operations.
I don't know the CSV row/column counts; they'll change per snapshot and per integration. May as well say the file being gzipped is an opaque binary blob for this Q.
there are no client=>sFTP network errors in play. (those are caught and handled; my concern is reading a file that's still being sporadically written during the server's batch job.)
Using a RESTful API instead of sFTP.
didn't find an existing SO
Several SO questions touch on handling truncation, but in contexts where loss is acceptable, as compared to needing to reliably fail the whole workflow on any problem. (I'm computing in a medical-data context, so I'd rather have the server halt and catch fire than propagate incorrect statistics.)
gzip: unexpected end of file with - how to read file anyway is the reverse -- they want to suppress EOF errors as not a problem for their use case
Why I get unexpected end of file in my script when using gzip? is just posix stream ends intentionally inserted by head and doesn't cover "is it possible for a false-positive to slip in?"
zcat / gzip error while piping out is very close, but doesn't ask "am I guaranteed to get this error?"
Merge possibly truncated gzipped log files is also close, as it deals with partial files from terminated batch jobs, but is still about throwing out a few unreadable rows, not about guaranteeing errors.
|
Files in gzip format contain the length of the compressed data and the length of uncompressed data. However, this is an ancient format and the length fields only have 32 bits, so nowadays they're interpreted as being the length modulo 2^32 (i.e. 4 GiB). Before decompression, gzip checks that the checksum of the compressed data is correct. After decompression, gzip checks that the checksum of the decompressed data is correct, and that the size of the decompressed data is correct modulo 2^32.
As a consequence, gzip is guaranteed to detect a truncated input if the size of the compressed data (or the size of the decompressed data) is less than 4 GiB. However, for arbitrary-sized files, I don't see any reason why those checks would be enough. If the input is not deliberately crafted and its length is uniformly distributed modulo 4 GiB, there's only a 1/2^64 chance that both the compressed length and the checksum match, and additionally an error will be detected if the file is truncated in the middle of a multi-byte sequence or if the length of the uncompressed data doesn't match. (That doesn't necessarily reduce the chance to 1/2^96 because the compressed length modulo 2^32 and the uncompressed length modulo 2^32 are correlated.) So there's only a tiny chance of an undetected error, but it's nonzero, and I'm sure it could be crafted deliberately.
Note that this analysis only applies if the gzipped file consists of a single stream. gunzip can uncompress files that consists of multiple concatenated streams, and there's no way to detect if the file contains a valid sequence of streams but more streams were intended. However, your production chain probably doesn't generate multi-stream files: gzip doesn't do it by itself, you'd have to concatenate the output of multiple runs manually, or use some other tool (pkzip?).
One way a truncated file can be served in the first place is if the server is not using atomic move operations.
Unfortunately I don't think there's a completely reliable way to detect errors without either that or an external piece of metadata (length or cryptographic checksum) which is calculated after the server has finished writing.
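To make the single-stream guarantee concrete, here is a sketch (scratch directory, made-up file names) showing that gzip -t accepts an intact member and rejects a truncated one:

```shell
# Create a small gzip file, then chop it short and re-test it.
tmp=$(mktemp -d) && cd "$tmp"
printf 'some log data\n' > sample.log
gzip -c sample.log > sample.log.gz

gzip -t sample.log.gz && echo "intact: OK"

head -c 10 sample.log.gz > truncated.gz   # keep only the first 10 bytes
if ! gzip -t truncated.gz 2>/dev/null; then
    echo "truncated: detected"
fi
```

Per the analysis above, this detection is guaranteed for files whose compressed size is under 4 GiB.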
| Can gzip -t detect 100% of truncated-download errors? |
1,331,533,062,000 |
I have a collection of gzipped files that I want to combine into a single file. They each have identical format. I want to keep the header information from only the first file and skip it in the subsequent files.
As a simple example, I have four identical files with the following content:
$ gzcat file1.gz
# header
1
2
I want to end up with
# header
1
2
1
2
1
2
1
2
In reality, I can have a varying number of files so I would like to be able to do this programmatically. Here is the non-programmatic solution I have so far...
cat <(gzcat file1.gz) <(tail -q -n +2 <(gzcat file2.gz) <(gzcat file3.gz) <(gzcat file4.gz))
This command works, but it is “hard coded” to handle four files,
and I need to generalize it for any number of files.
I am using bash as the shell if that helps. My preference is for performance (in reality the files can be millions of lines long), so I am OK with a less-than-elegant solution if it is speedy.
|
If the command that you show in your question basically works (for a hard-coded number of files), then
first=1
for f in file*.gz
do
if [ "$first" ]
then
gzcat "$f"
first=
else
gzcat "$f"| tail -n +2
fi
done > collection_single_file
should work for you.
I hope the logic is fairly clear.
Look at all the files (change the wildcard as appropriate for your file names).
If it’s the first one in the list, gzcat it, so you get the entire file
(including the header).
Otherwise, use tail to strip the header.
After you’ve handled a file, then no other file will be the first.
This invokes tail N−1 times, instead of just once (like your answer).
Aside from that, my answer should perform the same as your answer.
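If you also want the combined result compressed, a minimal variation pipes the whole loop through gzip. This sketch builds two sample files on the spot and uses gzip -dc, which is equivalent to gzcat on systems that lack it:

```shell
# Sample inputs matching the question's layout.
tmp=$(mktemp -d) && cd "$tmp"
printf '# header\n1\n2\n' | gzip > file1.gz
printf '# header\n1\n2\n' | gzip > file2.gz

first=1
for f in file*.gz
do
    if [ "$first" ]
    then
        gzip -dc "$f"               # first file: keep its header line
        first=
    else
        gzip -dc "$f" | tail -n +2  # later files: drop the header line
    fi
done | gzip > collection.gz

gzip -dc collection.gz    # one header, then the data rows of both files
```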
| Concatenate multiple zipped files, skipping header lines in all but the first file |
1,331,533,062,000 |
How do I go about compressing a folder (tar1) and sending that compressed folder to a different directory (let's say called tar2)?
I checked the questions on here, and most of them are using a file, not a directory and I've been trying all combinations, but I can't get it to work right.
In my ~ I have:
tar1/
a.txt
b.txt
tar2/
The tar1/ folder has test files a.txt and b.txt in it. I want to compress that folder and place the result in tar2/.
So afterwards it would look like:
tar1/
a.txt
b.txt
tar2/
tar1.gz
a.txt
b.txt
I was trying
tar czvf tar2/ tar1/
(A reference book I had gave an example in that syntax, where the first path is the place you want to store it, and the 2nd path is what you want to create the archive of.)
I've also tried somethings with -C as my destination:
tar cvzf tar1/ -C tar2/
Hoping that it would take tar1/, compress the directory, and place the result in tar2.
I got this error
tar: Cowardly refusing to create an empty archive
Try 'tar --help' or 'tar --usage' for more information.
I'm on RHEL 6.7
|
You are not specifying an archive in your statements.
It should look something like:
tar -cvf tar2/tar1.tar tar1/
This places the tarball tar1.tar inside the directory tar2/.
Before:
tree tar*
tar1
├── a.txt
└── b.txt
tar2
├── a.txt
└── b.txt
0 directories, 4 files
After:
tar -cvf tar2/tar1.tar tar1/
tar1/
tar1/a.txt
tar1/b.txt
tree tar*
tar1
├── a.txt
└── b.txt
tar2
├── a.txt
├── b.txt
└── tar1.tar
0 directories, 5 files
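Since the question asks for a compressed archive, adding the z flag produces a gzipped tarball the same way. This sketch recreates the example layout in a scratch directory:

```shell
# Rebuild the question's layout, then archive tar1/ into tar2/ gzipped.
tmp=$(mktemp -d) && cd "$tmp"
mkdir tar1 tar2
touch tar1/a.txt tar1/b.txt

tar -czvf tar2/tar1.tar.gz tar1/
tar -tzf tar2/tar1.tar.gz    # lists tar1/, tar1/a.txt, tar1/b.txt
```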
Environment:
Distributor ID: Debian
Description: Debian GNU/Linux 8.6 (jessie)
Release: 8.6
Codename: jessie
| How to tar a directory to a different directory? |
1,331,533,062,000 |
I've been trying to save space on my linux server, and I had a folder containting, in subfolders, 22GB of images.
So I decided to compress them.
First I used tar:
tar -zcf folder.tar folder
Then gzip
gzip folder
And finally, for good measure, just in case, bzip2
bzip2 folder
And after all that, the total of all the folder.tar.gz.bzip2s, came to still 22GB! With, using finer precision, a 1% space saving!
Have I done something wrong here? I would expect many times more than a 1% saving!
How else can I compress the files?
|
Compression ratio is very dependent on what you're compressing. The reason text compresses down so well is that it doesn't even begin to fully utilize the full range of numbers representable in the same binary space. So formats that do (e.g. compressed files) can store the same information in less space just by virtue of using all those binary numbers that mean nothing in textual encodings, and can effectively represent whole progressions of characters in a single byte and get a good compression ratio that way.
If the files are already compressed, you're typically not going to see much advantage to compressing them again. If that actually saved you additional space it's probably an indication that the first compression algorithm kind of sucks. Judging from the nature of the question I'm going to assume a lot of these are media files and as such are already compressed (albeit with algorithms that prioritize speed of decompression) and so you're probably not going to get much from them. Sort of a blood from a stone scenario: they're already as small as they could be made without losing information.
If I'm super worried about space I just do a "bzip2 -9" and call it good. I've heard good things about the ratio on XZ though. I haven't used XZ myself (other than to decompress other people's stuff), but it's supposed to have a better ratio than bzip2 but take a little longer to compress/decompress.
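The point about already-compressed data is easy to demonstrate: random bytes (a stand-in for compressed media) barely shrink, while repetitive text collapses. A sketch with made-up file names:

```shell
tmp=$(mktemp -d) && cd "$tmp"
head -c 1000000 /dev/urandom > random.bin                  # incompressible
yes 'the same line over and over' | head -c 1000000 > text.txt

gzip -9 -c random.bin | wc -c    # roughly 1000000: essentially no saving
gzip -9 -c text.txt  | wc -c     # vastly smaller than the input
```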
| Triple compression and I only save 1% in space? |
1,561,988,135,000 |
I am new to these commands. I am trying to gzip a local folder and unzip it on the remote server. The thing is, gzipping and unzipping must happen on the fly. I tried many commands, and the closest I believe is this:
tar cf dist.tar ~/Documents/projects/myproject/dist/ | ssh [email protected]:~/public_html/ "tar zx ~/Documents/projects/myproject/dist.tar"
As you can see above, I am trying to send out the dist folder to the remote server, but before that I am trying to compress the folder on the fly (looks like that is not happening in above command).
local folder: ~/Documents/projects/myproject/dist/
remote folder: ~/public_html (directly deploying to live)
Of course, no gzip-created file must be left behind; it should all happen on the fly.
My intention is to run the above like through a file like sh file.command. In other words, I am trying to deploy my compiled project which is in dist folder, to live when the sh command is executed. I don't want to do this manually every time I make a change in my project.
|
If you have rsync then use that instead, as it makes use of existing files to allow it to transfer only differences (that is, parts of files that are different):
rsync -az ~/Documents/projects/myproject/dist/ [email protected]:public_html/
Add the --delete flag to remove files from the target that no longer exist in the source, so the target tree exactly mirrors it each time. If you want to see what's going on, add -v.
If you don't have rsync, then this less efficient solution using tar will suffice:
( cd ~/Documents/projects/myproject/dist && tar czf - . ) |
ssh [email protected] 'cd public_html && tar xzf -'
Notice that the writing and reading of the compressed tarball is via stdout and stdin (the - filename). If you were using GNU tar you could use the -C to set a correct directory before processing, whereas here we've used old-fashioned (traditional?) cd. Add the v flag (on the receiving side) to see what's going on, i.e. tar xzvf ....
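The same pipe can be tried locally, without ssh, to check the directory handling before deploying; dist/ and public_html/ below are scratch stand-ins for the real paths:

```shell
# Stream a compressed tarball of dist/ straight into public_html/,
# never writing an intermediate .tar.gz to disk.
tmp=$(mktemp -d) && cd "$tmp"
mkdir dist public_html
echo '<h1>hello</h1>' > dist/index.html

( cd dist && tar czf - . ) | ( cd public_html && tar xzf - )
cat public_html/index.html
```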
| gzip compress a local folder and extract it to remote server |
1,561,988,135,000 |
I want to rename .gz files according to names in separate txt-file.
I have a map with .gz files with the names:
trooper10.gz
trooper11.gz
trooper12.gz
etc.
and I have a separate txt-file with the wanted name(s) in the first column and the .gz-names in the other column (tab-separated).
B25 trooper10
C76 trooper11
A87_2 trooper12
So the files should be renamed like this
B25.gz
C76.gz
A87_2.gz
I tried
for i in *.gz; do
line=$(grep -x -m 1 -- "${i}" /path_to_txtfile/list_names.txt)
But I'm not sure how to grep the corresponding column in the txt-file. Since there are many gz-files, is there a way to do this?
|
Yes, you just start by reading the file instead of getting the gz files from the file system:
while IFS=$'\t' read -r newName oldName; do
mv -- "$oldName".gz "$newName.gz"
done < names_file
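To preview the renames before committing, the same loop can echo the commands instead of running them; names_file and its entries below are made up for the demonstration:

```shell
# Dry run: print the mv commands; drop the echo to rename for real.
tmp=$(mktemp -d) && cd "$tmp"
printf 'B25\ttrooper10\nC76\ttrooper11\n' > names_file

while IFS=$'\t' read -r newName oldName; do
    echo mv -- "$oldName.gz" "$newName.gz"
done < names_file
```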
| Rename .gz files according to names in separate txt-file |
1,561,988,135,000 |
I have experience with the Windows version:
GNU Wget 1.19.4 built on mingw32.
But now on Ubuntu, I have:
GNU Wget 1.19.4, a non-interactive network retriever.
There is a relatively new option called --compression that was added in 1.19.2:
New option --compression for gzip Content-Encoding
And in 1.19.4 release notes this is also talked about.
When running wget -h, --compression is missing in the Ubuntu version. How can I get a version that has it, or how do I enable it? When I try to run an actual command it just says:
wget: unrecognized option '--compression=auto'
|
--compression is only available if wget is built with zlib (a library used for compression and decompression). The wget package in Debian doesn't explicitly build-depend on that library; it gets it indirectly, via another library, the GNU TLS library. The Ubuntu build drops the latter, and ends up losing support for compression as a result.
You can see this in the build logs:
checking for ZLIB... no
checking for compress in -lz... no
Rebuilding the package to enable --compression can be done as follows:
cd /tmp
apt source wget
cd wget-1.19.4
apt build-dep wget
apt install zlib1g-dev
dch -n "Rebuild with compression support."
dpkg-buildpackage -us -uc
This will produce a package in /tmp, which you can then install with dpkg -i. You might need to adjust the cd step above, depending on the version of wget your repositories contain. You might also need to add deb-src lines in /etc/apt/sources.list, matching your deb lines, to be able to download the source code using apt source.
Note that enabling compression by default (--compression=auto) can have surprising consequences, which is why the release notes mention that
As it turns out, implementing gzip support is not trivial; especially in the
face of many buggy servers that we have to support. Hence, for the time being,
connection compression support has been marked as experimental and disabled by
default.
See Debian bug 887910 for an example. I filed Debian bug 907047 and Ubuntu bug 1788608 asking for a fix; as a result, an explicit dependency on zlib was added in version 1.19.5-2 of the Debian package (present in Debian 10 and later), and imported into Ubuntu (19.04 and later).
| How could --compression be missing from my wget? |
1,561,988,135,000 |
The other day I was collecting some logs from a remote server and unthinkingly gzipped the files into a single file, rather than adding the directory to a tarball. I can manually separate out some of the log files, but some of them were already gzipped. So the original files look like:
ex_access.log
ex_access.log.1.gz
ex_access.log.2.gz
ex_debug.log
ex_debug.log.1.gz
ex_debug.log.2.gz
ex_update.log
ex_update.log.1.gz
ex_update.log.2.gz
and are compressed into exlogs.gz, which upon decompression is, as you would expect, one file with all the original files concatenated. Is there a way to separate out the original gz files so that they can be decompressed normally instead of printing out the binary:
^_<8B>^H^H<9B>C<E8>a^@
^Cex_access.log.1^@<C4><FD><U+076E>-Kr<9D> <DE><F7>S<9C>^W<E8><CE><F0><FF><88>y[<D5><EA>+<A1>^EHuU<A8>^K<B6><94><AA>L4E^R̤^Z^B<EA><E1><DB>}<AE>̳<B6><D6>I<C6><F8><9C><DB><C6>
<F1>@G`<E6><D6><FE><E0>3<C2><C3>ٰ̆|<E4><FC><BB>#<FD><EE><B8>~9<EA>+<A7>W+<FF><FB><FF><F6><9F><FE><97><FF><E3><97><FF><FD>^Z<E3><FF><F8><E5><FF><FE><CB><C7><FF>Iy<FC>?<8E><F9>?<F3>?<EF><B5><F7><F9><BF><FF>ß<FF>
[etc]
Yes, I could just collect the logs again (since I did have the sense to leave the originals intact), but getting approval for access to the server is a pain and I'd like to avoid it if at all possible.
Edit: the command I used is
gzip -c ex_* > exlogs.gz
|
As it happens, in gzip -c file1 file2 > result, gzip does create two separate compressed streams, one for each file, and even stores the file name and modification time of each file.
It doesn't let you use that information upon decompression, but you could use perl's IO::Uncompress::Gunzip module instead to do that. For instance with:
#! /usr/bin/perl
use IO::Uncompress::Gunzip;
$z = IO::Uncompress::Gunzip->new("-");
do {
$h = $z->getHeaderInfo() or die "can't get headerinfo";
open $out, ">", $h->{Name} or die "can't open $h->{Name} for writing";
print $out $buf while $z->read($buf) > 0;
close $out;
utime(undef, $h->{Time}, $h->{Name}) or warn "can't update $h->{Name}'s mtime";
} while $z->nextStream;
And calling that script as that-script < exlogs.gz, it would restore the files with their original names and modification time (without the sub-second part which is not stored by gzip) in the current working directory.
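The behavior the script relies on is that gzip -c file1 file2 writes one compressed member per input, and a plain gzip -dc concatenates them on decompression; a quick check with made-up names:

```shell
tmp=$(mktemp -d) && cd "$tmp"
printf 'one\n' > a.txt
printf 'two\n' > b.txt

gzip -c a.txt b.txt > both.gz   # two members, each storing name and mtime
gzip -dc both.gz                # contents come out concatenated
```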
| Un-concatenate gzipped file |
1,561,988,135,000 |
I have a folder containing a large amount of symlinked files. These files are each on the order of 10-11GB (fastq files to be specific). They come from a variety of source folders, but I made sure there's only one level of symlinks.
I'm trying to gzip them by simply doing:
gzip *.fastq
That results in a bunch of
too many levels of symbolic links
And thus fails.
However, when I do:
for i in `ls | egrep *.fastq$`; do gzip -c $i > $i.gz; done;
it does work. My question is simple. What is the difference between those? AFAIK, the only difference is that the second approach starts a new gzip process for each file, whereas the first one should do everything in one process. Can gzip only handle one symlinked file at a time? Doing the same on a test folder with normal files works both ways.
|
A quick check of the gzip source (specifically, gzip 1.6 as included in Ubuntu 14.04), shows that the observed behavior comes from the function open_and_stat, beginning at line 1037 of gzip.c:
static int
open_and_stat (char *name, int flags, mode_t mode, struct stat *st)
{
int fd;
/* Refuse to follow symbolic links unless -c or -f. */
if (!to_stdout && !force)
{
if (HAVE_WORKING_O_NOFOLLOW)
flags |= O_NOFOLLOW;
else
{
#if HAVE_LSTAT || defined lstat
if (lstat (name, st) != 0)
return -1;
else if (S_ISLNK (st->st_mode))
{
errno = ELOOP;
return -1;
}
#endif
}
}
fd = OPEN (name, flags, mode);
if (0 <= fd && fstat (fd, st) != 0)
{
int e = errno;
close (fd);
errno = e;
return -1;
}
return fd;
}
Note that the comment line states that gzip will not follow symlinks unless it is called with the -c or -f flags, and inside the #if ... #endif the errno variable is set to ELOOP (too many symbolic links encountered) if the file to be compressed is actually a symlink.
Now, from the gzip(1) man page, the -c and -f flags are:
-c --stdout --to-stdout
Write output on standard output; keep original files unchanged. If there are
several input files, the output consists of a sequence of independently com‐
pressed members. To obtain better compression, concatenate all input files
before compressing them.
-f --force
Force compression or decompression even if the file has multiple links or the
corresponding file already exists, or if the compressed data is read from or
written to a terminal. If the input data is not in a format recognized by gzip,
and if the option --stdout is also given, copy the input data without change to
the standard output: let zcat behave as cat. If -f is not given, and when not
running in the background, gzip prompts to verify whether an existing file
should be overwritten.
Putting all together and going back to the original question:
The first example fails because it is trying to compress the actual symlink (even if it is not an actual link loop)
The second uses the -c flag, so it is reading the contents of the original file and then writing the compressed output to stdout, so it succeeds.
A third scenario is using -f instead of -c. In this case, gzip does not complain when trying to compress a symlink, but upon decompression it becomes a regular file, as shown:
$ ls -l
total 4
-rw-rw-r-- 1 x86tux x86tux 13 Jun 16 13:10 realfile.txt
lrwxrwxrwx 1 x86tux x86tux 12 Jun 16 23:40 symlink.txt -> realfile.txt
$ gzip symlink.txt
gzip: symlink.txt: Too many levels of symbolic links
$ gzip -f symlink.txt
$ ls -l
total 8
-rw-rw-r-- 1 x86tux x86tux 13 Jun 16 13:10 realfile.txt
-rw-rw-r-- 1 x86tux x86tux 45 Jun 16 13:10 symlink.txt.gz
$ gunzip symlink.txt.gz
$ ls -l
total 8
-rw-rw-r-- 1 x86tux x86tux 13 Jun 16 13:10 realfile.txt
-rw-rw-r-- 1 x86tux x86tux 13 Jun 16 13:10 symlink.txt
$ md5sum *
618f486e0225d305d16d0648ed44b1eb realfile.txt
618f486e0225d305d16d0648ed44b1eb symlink.txt
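As an aside, the loop in the question parses the output of ls, which breaks on unusual file names; a glob-based sketch of a safer equivalent that still reads through symlinks via -c:

```shell
# Demo setup: one real file plus a symlink to it.
tmp=$(mktemp -d) && cd "$tmp"
echo 'AACCGGTT' > real.fastq
ln -s real.fastq link.fastq

for f in ./*.fastq; do
    [ -e "$f" ] || continue        # skip if the glob matched nothing
    gzip -c "$f" > "$f.gz"
done
ls    # both .gz files created, no "too many levels" error
```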
| Gzip large amount of symlinked files |
1,561,988,135,000 |
I'm trying to mirror a site, but the server only responds with gzipped pages, so wget won't recurse. I've searched around and there are some references to a patch adding gzip support to wget; however, they seem to be out of date. Is there any way to do this? If not, I was considering reverse proxying it through nginx.
|
You have 4 ways:
wget one page, gunzip it and process it again from the html... iterate until finished:
wget -m http://example.org/page.html
find . -name \*gz -exec gzip -d {} \;
find . -name \*html -exec wget -M -F {} \;
This will be slow, but should work.
Install Privoxy and configure it to uncompress the requested pages:
+prevent-compression
Prevent the website from compressing the data. Some websites do that, which is a problem for Privoxy when built without zlib support, since +filter and +gif-deanimate will not work on compressed data. Will slow down connections to those websites, though.
Privoxy or another proxy might also be able to get the compressed pages and deliver the uncompressed copy to the client; Google for it.
My wget won't send the "Accept-Encoding: gzip" header that requests gzip content... Check why yours does. Maybe you have a proxy that is adding it?
You can also use Privoxy to remove that header.
| mirror a site with wget that only response with gzip |
1,561,988,135,000 |
I have a Python daemon running on CentOS that opens a file at the beginning of a session, and keeps writing to it.
A cronjob, however, gzipped the file that was being written to, so the file moved from log.txt to log.txt.gz. The daemon kept writing to log.txt. The daemon has since been stopped, closing the file descriptor to log.txt.
Is there any way to recover the data that was written by the daemon to log.txt after the file was moved to log.txt.gz?
|
AFAICT, no. The problem is that the gzip process will create a new file and will free the previous (the unzipped) one, including removing it from the directory. If no other hard link in the filesystem is pointing to the file, it will get lost once the last file descriptor referring to it is closed.
For the future you'd be advised to synchronize access to that file resource instead of letting two processes access the file simultaneously (one for writing, the other for deleting).
Another option is to let gzip create a zipped copy. But then you'd have a race condition where not all written contents in the file might get into the gz file.
| Recovering data written to file that moved |
1,561,988,135,000 |
Filenames are the ones shown below, but the directory will be libgit2-0.21.1 after tar xvf v0.21.1.tar.gz, so how can a one-liner get the directory name (reporting an error if more than one top-level directory exists)?
tar tvf v0.21.1.tar.gz | head
drwxrwxr-x root/root 0 2014-08-05 08:09 libgit2-0.21.1/
-rw-rw-r-- root/root 1169 2014-08-05 08:09 libgit2-0.21.1/.HEADER
-rw-rw-r-- root/root 22 2014-08-05 08:09 libgit2-0.21.1/.gitattributes
-rw-rw-r-- root/root 321 2014-08-05 08:09 libgit2-0.21.1/.gitignore
-rw-rw-r-- root/root 1246 2014-08-05 08:09 libgit2-0.21.1/.mailmap
I have one solution for this, but obviously it is not the best one (I also have to check whether there is more than one directory at the top):
mkdir libgit2 && tar xvf v0.21.1.tar.gz -C libgit2 --strip-components 1
|
Execute the following script with your tarball as command line parameter
#!/bin/bash
DIR=$(tar tvf ${1} | egrep -o "[^ ]+/$")
if [ $(echo ${DIR} | egrep -o " " | wc -l) -eq 0 ]; then
echo ${DIR};
else
echo "ERROR: multiple directories in tarball base"
exit 1
fi
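An alternative sketch without a helper script: list the archive, keep only the first path component of each entry, and de-duplicate. More than one output line would mean multiple top-level directories (sample tarball built on the spot):

```shell
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p libgit2-0.21.1
touch libgit2-0.21.1/.gitignore
tar czf v0.21.1.tar.gz libgit2-0.21.1

tar tzf v0.21.1.tar.gz | cut -d/ -f1 | sort -u    # libgit2-0.21.1
```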
| command to get topmost directory name in compressed files |
1,561,988,135,000 |
Creating a git repository for testing.
~ $ mkdir somefolder
~ $ cd somefolder/
~/somefolder $ git init
Initialized empty Git repository in /home/user/somefolder/.git/
~/somefolder $ echo test > xyz
~/somefolder $ mkdir somefolder2
~/somefolder $ echo test2 > ./somefolder2/zzz
~/somefolder $ git add *
~/somefolder $ git commit -a -m .
[master (root-commit) 591fda9] .
2 files changed, 2 insertions(+)
create mode 100644 somefolder2/zzz
create mode 100644 xyz
When turning the whole repository into a tar.gz, it results in a deterministic file. Example.
~/somefolder $ git archive \
> --format=tar \
> --prefix="test/" \
> HEAD \
> | gzip -n > "test.orig.tar.gz"
~/somefolder $ sha512sum "test.orig.tar.gz"
e34244aa7c02ba17a1d19c819d3a60c895b90c1898a0e1c6dfa9bd33c892757e08ec3b7205d734ffef82a93fb2726496fa16e7f6881c56986424ac4b10fc0045 test.orig.tar.gz
Again.
~/somefolder $ git archive \
> --format=tar \
> --prefix="test/" \
> HEAD \
> | gzip -n > "test.orig.tar.gz"
~/somefolder $ sha512sum "test.orig.tar.gz"
e34244aa7c02ba17a1d19c819d3a60c895b90c1898a0e1c6dfa9bd33c892757e08ec3b7205d734ffef82a93fb2726496fa16e7f6881c56986424ac4b10fc0045 test.orig.tar.gz
Works.
But when changing a minor detail, when only compressing a sub folder, it does not end up with a deterministic file. Example.
~/somefolder $ git archive \
> --format=tar \
> --prefix="test/" \
> HEAD:somefolder2 \
> | gzip -n > "test2.orig.tar.gz"
~/somefolder $ sha512sum "test2.orig.tar.gz"
b523e9e48dc860ae1a4d25872705aa9ba449b78b32a7b5aa9bf0ad3d7e1be282c697285499394b6db4fe1d4f48ba6922d6b809ea07b279cb685fb8580b6b5800 test2.orig.tar.gz
Again.
~/somefolder $ git archive \
> --format=tar \
> --prefix="test/" \
> HEAD:somefolder2 \
> | gzip -n > "test2.orig.tar.gz"
~/somefolder $ sha512sum "test2.orig.tar.gz"
06ebd4efca0576f5df50b0177d54971a0ffb6d10760e60b0a2b7585e9297eef56b161f50d19190cd3f590126a910c0201616bf082fe1d69a3788055c9ae8a1e4 test2.orig.tar.gz
No deterministic tar.gz this time for some reason.
How to create a deterministic tar.gz using git-archive when just wanting to compress a single folder?
|
When you do a simple export with HEAD, an internal timestamp is initialized based on the commit's timestamp. When you use more advanced filtering options, the timestamp is set to the current time. To change the behavior, you need to fork/patch git and change the second scenario, e.g. this proof of concept:
diff --git a/archive.c b/archive.c
index 94a9981..0ab2264 100644
--- a/archive.c
+++ b/archive.c
@@ -368,7 +368,7 @@ static void parse_treeish_arg(const char **argv,
archive_time = commit->date;
} else {
commit_sha1 = NULL;
- archive_time = time(NULL);
+ archive_time = 0;
}
tree = parse_tree_indirect(sha1);
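If patching git is not an option, one possible workaround (a different technique; GNU tar 1.28+ assumed for --sort) is to re-pack an exported tree with tar flags that pin every non-deterministic field:

```shell
# Build a sample tree, then pack it with fixed order, owner and mtime.
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p test && echo 'test2' > test/zzz

tar --sort=name --mtime='1970-01-01 00:00:00 UTC' \
    --owner=0 --group=0 --numeric-owner -cf - test |
    gzip -n > test2.orig.tar.gz
sha512sum test2.orig.tar.gz    # identical on every run
```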
| How to create a deterministic tar.gz using git-archive? |
1,561,988,135,000 |
I have a big .gz file, 2.6 GB in itself; it contains a single large text file. I cannot decompress it completely due to a size limitation. I want to split it into, say, 10 individual parts and decompress each one individually so that I can use each individual file:
My questions are:
Is that possible ?
Also, as part of the answer, if the commands can also be provided as I am not very well versed in these commands
Thanks
|
The gzip compression format supports decompressing a file that has been concatenated from several smaller compressed files (the decompressed file will then contain the concatenated decompressed data), but it doesn't support decompressing a cut up compressed file.
Assuming you would want to end up with a "slice" of the decompressed data, you may work around this by feeding the decompressed data into dd several times, each time selecting a different slice of the decompressed data to save to a file and discarding the rest.
Here I'm using a tiny example text file. I'm repeatedly decompressing it (which will take a bit of time for large files), and each time I pick a 8 byte slice out of the decompressed data. You would do the same, but use a much larger value for bs ("block size").
$ cat file
hello
world
1
2
3
ABC
$ gzip -f file # using -f to force compression here, since the example is so small
$ gunzip -c file.gz | dd skip=0 bs=8 count=1 of=fragment
1+0 records in
1+0 records out
8 bytes transferred in 0.007 secs (1063 bytes/sec)
$ cat fragment
hello
wo
$ gunzip -c file.gz | dd skip=1 bs=8 count=1 of=fragment
1+0 records in
1+0 records out
8 bytes transferred in 0.000 secs (19560 bytes/sec)
$ cat fragment
rld
1
2
(etc.)
Use a bs setting that is about a tenth of the uncompressed file size, and in each iteration increase skip from 0 by one.
UPDATE: The user wanted to count the number of lines in the uncompressed data (see comments attached to the question). This is easily accomplished without having to store any part of the uncompressed data to disk:
$ gunzip -c file.gz | wc -l
gunzip -c will decompress the file and write the uncompressed data to standard output. The wc utility with the -l flag will read from this stream and count the number of lines read.
| Split gz file and decompress individually [duplicate] |
1,561,988,135,000 |
I'm attempting to extract the contents of some files by alphabetical (which in this case also means date and iteration) order and when I test the process first with ls:
$ find /opt/minecraft/wonders/logs/ -name 20* -type f -mtime -3 -print0 \
| sort | xargs -r0 ls -l | awk -F' ' '{print $6 " " $7 " " $9}'
I get a positive result:
Aug 18 /opt/minecraft/wonders/logs/2018-08-17-3.log.gz
Aug 18 /opt/minecraft/wonders/logs/2018-08-18-1.log.gz
Aug 19 /opt/minecraft/wonders/logs/2018-08-18-2.log.gz
Aug 19 /opt/minecraft/wonders/logs/2018-08-19-1.log.gz
Aug 20 /opt/minecraft/wonders/logs/2018-08-19-2.log.gz
Aug 20 /opt/minecraft/wonders/logs/2018-08-20-1.log.gz
However, when I go to actually extract the files the sort order is lost:
$ find /opt/minecraft/wonders/logs/ -name 20* -type f -mtime -3 -print0 \
| sort | xargs -r0 gunzip -vc | grep "\/opt.*"
/opt/minecraft/wonders/logs/2018-08-18-1.log.gz: 66.8%
/opt/minecraft/wonders/logs/2018-08-18-2.log.gz: 83.1%
/opt/minecraft/wonders/logs/2018-08-19-1.log.gz: 70.3%
/opt/minecraft/wonders/logs/2018-08-19-2.log.gz: 72.9%
/opt/minecraft/wonders/logs/2018-08-20-1.log.gz: 73.3%
/opt/minecraft/wonders/logs/2018-08-17-3.log.gz: 90.2%
How can I maintain the sort order while unzipping these files?
|
You have used the -print0 option with find, and -0 with xargs, but you forgot to use -z for sort, so sort essentially sees a single line (unless your filenames contain \n). The output you see with ls is probably ls doing some sorting.
find /opt/minecraft/wonders/logs/ -name '20*' -type f -mtime -3 -print0 |
sort -z | xargs -r0 gunzip -vc | grep /opt
(Note: 20* is a glob and needs to be quoted for the shell so it's passed literally to find, you don't want to escape / for grep, what that does is unspecified, no need for .* at the end of the regexp if all you want is print the matching line)
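The effect of -z is easy to see in isolation with a two-entry NUL-separated list (paths invented to resemble the question's):

```shell
# sort -z keeps NUL-separated entries separate and orders them;
# without -z the whole input would be treated as a single "line".
printf './logs/2018-08-19-1.log.gz\0./logs/2018-08-18-2.log.gz\0' |
    sort -z | tr '\0' '\n'
```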
| How to maintain sort order with xargs and gunzip |
1,561,988,135,000 |
I have a large collection of gz files. I want to extract them all. Here is what I was trying to do:
find . | grep .gz | gunzip
However, gunzip does not accept a list of files from standard input. How can I decompress them all (in place)?
|
If what you are after is to call gunzip on every file with a name
ending in .gz anywhere within your current directory, this should do
it:
find . -type f -name '*.gz' -exec gunzip {} +
The more general way to turn what is on standard input into arguments
to a command is to use xargs, but there are a few gotchas to be aware
of with that command.
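For completeness, a NUL-safe xargs sketch (GNU find and xargs assumed) that sidesteps the usual gotchas with whitespace and quote characters in file names:

```shell
# Demo: a name with a space, compressed then found and gunzipped in place.
tmp=$(mktemp -d) && cd "$tmp"
echo 'data' > 'a file.txt' && gzip 'a file.txt'

find . -type f -name '*.gz' -print0 | xargs -r0 gunzip
ls    # "a file.txt" is back, decompressed
```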
| gunzip from stdin |
1,561,988,135,000 |
I have a MySQL database backup in a gz file. When trying to uncompress it I get the following:
gzip: db_stepup.sql.gz: not in gzip format
I read that sometimes is just a matter of removing the gz extension. So I did that and I'm able to see the file up a point.
(352, 'bs', 'lv', 'Bosnian'),
(353, 'bs', 'lt', 'Bosnian'),
(354, 'bs', 'mk', 'Bosnian'),
(355, 'bs', 'mt', 'Bosnian')\8B\00\00\00\00\00\00}\9Dۓו\EE\DF\CF_\C1 \DBx"<\95\F7\CC8O3c\CF\D8\C7c\8F\E3\D8s&Γ]MWW\95*\B3\AA\AB2\AB\DDO@\83\84A\B2el\A1;\81$\8B\ABԒ<\B4\F4
\AD' \CEI6\D2\FFp2\F7\DA{\FB\B2 <nk\F4\ADܙ\F5[;\F7\FAv\DE\F6\F7\FF \C7\F7\A2$\FD\FE\BEX\A9\FF\A1\FD[ͺ\BF\FF2\AB\A7\E3\FE\F4\FE\F1\FB\9D\9AA\9D\FAj\AE\D5\E9\C0W\A8\A5V\EB\FD#\92\D3\E4o\E34\D0\EAz\DFWC\A8\A2\E9\95\D9\AF\B4\F2\FE\C9XDh\FEi\BD-\EC^i\9B\98ɀ\D8XY\F8\89\98/\FD\00\B5\85Am\FF\E7\DBR\B7\85\D8z\EF\F7{7\BE<\B8\F7\D9\DE\CE\DE\C7\ED\FF~\D2\FD\AFĺ\F4w\88\B5\9F\9E빯\82a\BD\F0U0\AC7\90\9EI_\CA \D8\F8
At this point the file gets scrambled. It looks like a problem of text encoding, is there a way to recover the data in the file?
Here's the file if you want to take a look at it
|
The file starts out in plain ASCII so it's uncompressed.
$ hexdump -C db_stepup.sql.gz | less
00000000 2d 2d 20 70 68 70 4d 79 41 64 6d 69 6e 20 53 51 |-- phpMyAdmin SQ|
00000010 4c 20 44 75 6d 70 0a 2d 2d 20 76 65 72 73 69 6f |L Dump.-- versio|
00000020 6e 20 34 2e 31 2e 31 34 2e 38 0a 2d 2d 20 68 74 |n 4.1.14.8.-- ht|
[...]
That goes on for a while until somewhere in the middle it turns binary.
00012390 27 2c 20 27 6d 74 27 2c 20 27 42 6f 73 6e 69 61 |', 'mt', 'Bosnia|
000123a0 6e 27 29 1f 8b 08 00 00 00 00 00 00 03 7d 9d db |n')..........}..|
000123b0 93 14 d7 95 ee df cf 5f c1 db 78 22 3c 11 95 f7 |......._..x"<...|
It starts with 1f 8b 08 ... (which for some reason did not show as such in the output you posted), it could be a valid gzip header. The starting point is 000123a3 so let's split it off...
$ dd if=db_stepup.sql.gz bs=$((0x000123a3)) skip=1 | gunzip | less
,
(356, 'bs', 'mo', 'Bosnian'),
(357, 'bs', 'mn', 'Bosnian'),
(358, 'bs', 'ne', 'Bosnian'),
[...]
And hey, that seems to be the data where it left off. For some strange reason phpMyAdmin seems to have decided to use gzip in the middle of the output...
Stitching it back together:
$ dd if=db_stepup.sql.gz bs=$((0x000123a3)) count=1 > db_stepup.stitch.sql
$ dd if=db_stepup.sql.gz bs=$((0x000123a3)) skip=1 | gunzip >> db_stepup.stitch.sql
If you're looking for a way to find such offsets automatically (maybe you have more broken files like that), there's this nice little tool called binwalk which can also look for known file headers in the middle of files.
$ binwalk db_stepup.sql.gz
DECIMAL HEXADECIMAL DESCRIPTION
--------------------------------------------------------------------------------
74659 0x123A3 gzip compressed data, from Unix, NULL date: Thu Jan 1 00:00:00 1970
92556 0x1698C gzip compressed data, from Unix, NULL date: Thu Jan 1 00:00:00 1970
110522 0x1AFBA gzip compressed data, from Unix, NULL date: Thu Jan 1 00:00:00 1970
[...]
As you can see it has the same result (0x123A3 offset). It finds more than one because gzip comes in blocks / chunks (you can even concatenate multiple gzip files) and each block has the same distinct header.
| Corrupted gz file |
1,561,988,135,000 |
There are other questions asking how to enable gzip compression with wget, and lots of web pages out there telling how to do this, but I need the opposite. I'm trying to locally mirror a site, and I'm just getting the home page as a gzipped file, which in turn breaks the recursion, so I can't get the whole site.
I can gunzip that file, but that still doesn't give me a recursive download of the whole cotton-pickin' site.
How do I turn off or prevent gzipping?
EDIT: The exact command I issued is
wget --random-wait -r -p -e robots=off -U mozilla http://www.example.com --reject png,jpg,jpeg,gif --progress=dot --wait=7
|
D'oh! I figured it out. I had put
header = Accept-Encoding: gzip,deflate
in my ~/.wgetrc some time ago, assuming that it would only affect the way data was passed across the network, never thinking that wget would be unable to read the gzipped data.
In retrospect it makes sense: this is just a header that wget allows you to use (since it allows you to use any header that a browser might pass, or any you want to make up for that matter) rather than a switch built into wget, so why would anyone expect wget to automatically handle the gzipping? It would certainly be nice if it did, but there's no reason to assume that it would.
| How to disable gzip compression with wget? |
1,561,988,135,000 |
Based on this, to tar all hidden files in the current directory one can use
ls -A | egrep '^\.' | tar cvf ./test.tar -T -
However, how can one tar only all hidden directories or all hidden directories and files in the current directory?
Based on this, ls -ap | egrep "^\..*/$" | tar -zcvf hiddens.tar.gz -T - should do the trick. Yet it does not. This command simply creates an empty tar.gz archive, so tar does not see any directories at all with this command.
|
In zsh, to tar all hidden dirs (and their contents):
tar zcf file.tar.gz .*(/)
(note that the standard .*/ is not the same as zsh's .*(/) as it would also include symlinks and with some tar implementations, because a / is appended to the paths resulting from the glob expansion, would tar them as directories (along with all their contents), not symlinks, so likely not what you want; see also the note below about . and .. in many shells).
To tar hidden dirs and regular files:
tar zcf file.tar.gz .*(/,.)
To tar all hidden files regardless of their type (dirs, regular, symlinks, fifos...)
tar zcf file.tar.gz .*
That one would also work with the fish shell, or mksh or other shells based on the Forsyth shell, but not with most other shells as those other shells do include . and .. in the expansion of that glob.
With ksh93 however, you can do:
(FIGNORE='@(.|..)'; tar zcf file.tar.gz .*)
With bash:
(shopt -s dotglob failglob; tar zcf file.tar.gz [.]*)
Or with bash 5.2 or newer:
(shopt -s globskipdots failglob; tar zcf file.tar.gz .*)
(globskipdots causing . and .. never to be returned in glob expansions, an option you'd likely want always enabled as having . and .. included is never useful).
| Tarring-gzipping only hidden directories (or files + directories) |
1,561,988,135,000 |
I need to use this command:
awk 'NR==FNR {a[$1]; next} $1 in a{print}' inputfile1.txt inputfile2.gen.gz >output.txt
Awk doesn't read the .gz file.
I have tried
zcat inputfile2.gz | awk 'NR==FNR {a[$1]; next} $1 in a{print}' inputfile1.txt inputfile2.gen.gz >output.txt
It still doesn't work.
|
Give Awk a dash argument to say to read its standard input.
zcat inputfile2.gz |
awk 'NR==FNR {a[$1]; next}
$1 in a' inputfile1.txt - >output.txt
(Notice also I took out the {print} since that is already the implied default action in Awk.)
| How to extract information using awk on .gz files without storing the uncompressed data on disk |
1,561,988,135,000 |
Is there a risk of losing files during compression if the process dies? I'm gzipping big files using a wildcard, but I accidentally ran it without screen or nohup. Do I risk losing the source files if I cancel gzip while it is creating the archive? Gzip version:
gzip 1.3.5
(2002-09-30)
Please let me know, I'm afraid of losing data.
|
No, there is no chance of losing data during compression. The source file is only deleted once it has been completely processed (and only if you don't specify -k or --keep, in which case the source is not deleted at all).
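As an illustration (note that -k/--keep only exists in gzip 1.6 and later, so the 1.3.5 version mentioned in the question doesn't have it):

```shell
# gzip removes the source only after the .gz is fully written;
# with -k (gzip >= 1.6) the source is never removed at all.
echo data > f.txt
gzip -k f.txt
ls f.txt f.txt.gz    # both files exist
```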
| gzip - is there a risk to lose files during compression if process dead? |
1,561,988,135,000 |
I have downloaded this file
http://download.icu-project.org/files/icu4c/55.1/icu4c-55_1-HPUX11iv3-aCC.tgz
https://ssl.icu-project.org/files/icu4c/55.1/icu4c-bin-55_1.md5
md5sum ok
But on Linux and HP-UX 11.31 it gives me this error. I have tried various commands:
gunzip icu4c-55_1-HPUX11iv3-aCC.tgz
gunzip: icu4c-55_1-HPUX11iv3-aCC.tgz: invalid compressed data--format violated
gunzip -d < icu4c-55_1-HPUX11iv3-aCC.tgz| tar xvf -
gzip: stdin: invalid compressed data--format violated
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors
tar -tvf icu4c-55_1-HPUX11iv3-aCC.tgz
gzip: stdin: invalid compressed data--format violated
tar: Child returned status 1
tar: Error is not recoverable: exiting now
|
Solution found: it was a corrupted file on their server. The md5 checksum matched because it refers to the corrupted file itself.
I downloaded an older release, which works fine.
| tgz file give invalid compressed data--format violated [closed] |
1,561,988,135,000 |
I would like to skip a large gzip file when I extract a tar file, but everytime tar starts to gunzip it to look inside it seems.
Even when I just try to peek inside tar seems to start gunzipping it, for example:
tar -tvf my.tar.gz --exclude="huge_mysql_file.gz"
Any tips how to skip entirely that file? I see gzip running when tar gets to that gz file.
|
my.tar.gz is a gzipped tar file. tar is short for 'tape archive'.
So the file my.tar.gz has to be unzipped by gzip -d before the tar file can be read.
There is no random access to the contents of a gzip file; the stream has to be decompressed sequentially from the start. The gzip -d started by tar to unzip the my.tar.gz file is the gzip you can see. tar does not decompress the inner file huge_mysql_file.gz itself, but it still has to read (and ignore) its bytes, because they are delivered through the pipe from gunzip.
If the tar file is not gzipped and you list its contents, as in
tar -tvf my.tar --exclude="huge_mysql_file.gz"
only the header of huge_mysql_file.gz in the archive must be read to know its size. Then tar can skip the file without reading it and continue on the following file.
| tar: exclude gzip file, and don't try to gunzip it |
1,561,988,135,000 |
I want to know whether it is possible to gunzip multiple files and rename them with one command/script.
I have a bunch of files in the format:
test.20120708191601.DAT.3599502593.gz
test.20120708201601.DAT.99932140.gz
test.20120708204600.DAT.1184686967.gz
test.20120708212100.DAT.824089664.gz
test.20120708215100.DAT.1286044098.gz
test.20120708222100.DAT.1414234861.gz
I need to gunzip them and remove everything after the .DAT, to be in the format:
test.20120708191601.DAT
test.20120708201601.DAT
test.20120708204600.DAT
test.20120708212100.DAT
test.20120708215100.DAT
test.20120708222100.DAT
|
Try this:
for file in *.gz; do
gunzip -c "$file" > "${file/.DAT*/.DAT}"
done
The approach uses gunzip's option to output the uncompressed stream to standard output (-c), so we can redirect it to another file, without a second renaming call. The renaming is done on the filename variable itself, using bash substitution (match any globbing pattern .DAT* and replace it with .DAT). The loop itself just iterates over files in the current directory with names ending with .gz.
| Gunzip multiple files and rename them |
1,561,988,135,000 |
I've downloaded some .csv files from the OECD Stats website, since I need them to plot some graphs with gnuplot. When I open them with File Roller 3.4.1 (the default program to handle archive files), there's a file that seems empty (0 byte). When I try on the terminal, I get:
gunzip Financial\ Indicators\ –\ Stocks.gz
gzip: Financial Indicators – Stocks.gz: invalid compressed data--length error
gzip: Financial Indicators – Stocks.gz has more than one entry -- unchanged
So the file seems corrupted, but I sent it to a friend who uses Windows. He extracted the file on his computer and sent me the output: it's a zip files which, in turn, contains two .csv files. So the file is not corrupt, there must be a problem with the packages used to handle them. Any suggestion?
|
If it is a zip, then the file extension is wrong and you cannot use gunzip on it. If it contains two files, then it is not a gzip file: gunzip works on single files, since gzip only compresses; that is why it is often combined with tar to bundle multiple files. Your file may be a zip, or a tarred-then-gzipped archive (.taz).
Use file command to find out.
If zip
You need unzip.
Also, if it is a zip, change the file extension, as some tools may use it to detect the file type.
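For example (using the filename from the question; the exact wording of file's output can vary between versions):

```shell
# Identify the real format regardless of the extension:
file 'Financial Indicators – Stocks.gz'
# A real gzip file reports "gzip compressed data";
# a zip archive reports "Zip archive data" and needs unzip instead.
```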
| Error with a .gz file decompression in Mint, but works perfectly in Windows, so the file is not corrupt |
1,561,988,135,000 |
I am using the below command to find files which are greater than a particular size and gzip them. How can I modify the command to include a timestamp at the end of the file name?
find . -type f -name "*querry_match*" -size +550000000c -exec gzip {} \;
Expectation,
Before zipping: querry_match_file1
After zipping: querry_match_file1.`date +"%m-%d-%Y-%H:%M:%S"`.z
querry_match_file1.09-24-2015-02:50:56.z
|
If by timestamp you mean "now", rather than the time of the file, you can try something like this:
find . -type f -name "*querry_match*" -size +550000000c \
     -exec bash -c 'gzip --suffix "$(date +".%m-%d-%Y-%H:%M:%S.z")" "$1"' bash {} \;
where the date command is run separately for each file (note the file name is passed as a positional argument rather than embedded in the bash -c string, which would be unsafe with unusual file names). If you want the same date on all files, as at the start of the find, simply do:
find . -type f -name "*querry_match*" -size +550000000c \
     -exec gzip --suffix "$(date +".%m-%d-%Y-%H:%M:%S.z")" {} \;
| Append timestamp while zipping a file |
1,561,988,135,000 |
I have a compressed tarball (eg foo.tar.gz) that I wish to extract all the files from, but the files within the tarball are not compressed. That is to say, the contents of foo.tar.gz are un-compressed txt files.
There is not enough space on my filesystem to extract the files directly, so I wish to extract these files and immediately compress them as they are written to the disk. I can't simply extract the files and then gzip the extracted files because, as I've said, there's not enough space on the filesystem. I would also like to ensure that the original filenames, including their directories are faithfully preserved on disk. So if one of the files in the tarball is /a/b/c/foo.txt, at the end of the process I would like to have /a/b/c/foo.txt.gz
How can I accomplish this?
|
It won't be fast, especially for a large tarball with lots of files, but in bash you can do this:
tar -tzf tarball.tgz | while IFS= read -r file; do
tar --no-recursion -xzf tarball.tgz -- "$file"
gzip -- "$file"
done
The first tar command extracts the names of the files in the tarball, and passes those names to a while read ... loop. The file name is then passed to a second tar command that extracts just that file, which is then compressed before the next file is extracted. The --no-recursion flag is used so trying to extract a directory doesn't extract all the files under that directory, which is what tar would normally do.
You'll still need enough free space to store somewhat more than the original size of the compressed tarball.
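With GNU tar, an alternative sketch is the --to-command option, which pipes each regular member's contents to a command (with the member name available in $TAR_FILENAME), so the uncompressed data never touches the disk. Non-regular members such as directories are skipped by --to-command, hence the mkdir -p:

```shell
# GNU tar only: compress each member on the fly while extracting.
tar -xzf tarball.tgz --to-command='
    mkdir -p "$(dirname "$TAR_FILENAME")"
    gzip > "$TAR_FILENAME.gz"'
```

This reads the tarball only once, at the cost of relying on a GNU extension.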
| use tar to extract and immediately compress files from tarball |
1,561,988,135,000 |
I have a directory containing multiple folders; each folder contains multiple .gz files whose compressed content has the same name, "spark.log". How can I unzip all of them at once and name each output after its gz file?
My data looks like this
List of folders
A
B
C
D
In every of them there are files as
A
spark.log.gz
spark.log.1.gz
spark.log.2.gz
spark.log.3.gz
B
spark.log.gz
spark.log.1.gz
spark.log.2.gz
spark.log.3.gz
C
spark.log.gz
spark.log.1.gz
spark.log.2.gz
spark.log.3.gz
D
spark.log.gz
spark.log.1.gz
spark.log.2.gz
spark.log.3.gz
Each of the gz files contains spark.log. I'd like to be able to unzip and rename them according to their gz name. For example:
spark.log.1.gz -> spark.log.1.log
|
While gzip does or can store the original name, which you can reveal by running gzip -Nl file.gz:
$ gzip spark.log
$ mv spark.log.gz spark.log.1.gz
$ gzip -l spark.log.1.gz
compressed uncompressed ratio uncompressed_name
170 292 51.4% spark.log.1
$ gzip -lN spark.log.1.gz
compressed uncompressed ratio uncompressed_name
170 292 51.4% spark.log
gunzip will not use that for the name of the uncompressed file unless you pass the -N option and will just use the name of the gzipped file with the .gz suffix removed.
You may be confusing it with Info-ZIP's zip command and its related zip format which is a compressed archive format while gzip is just a compressor like compress, bzip2, xz...
So you just need to call gunzip without -N on those files:
gunzip -- */spark.log*.gz
And you'll get spark.log, spark.log.1, spark.log.2... (not spark.log.1.log which wouldn't make sense, nor spark.1.log, which could be interpreted as a log file for a spark.1 service as opposed to the most recent rotation of spark.log).
Having said that, there's hardly ever any reason to want to uncompress log files. Accessing the contents is generally quicker when they are compressed. Modifying the contents is potentially more expensive, but you generally don't modify log files after they've been archived / rotated. You can use zgrep, vim, zless (even less if configured to do so) to inspect their contents. zcat -f ./*.log*(nOn) | grep... if using zsh to send all the logs from older to newer to grep, etc.
| gunzip multiple gz files with same compressed file name in multiple folders |
1,561,988,135,000 |
I have an Operating system image of size 2.5G.
I have a device with a limited size. Thus I was looking for the best possible solution for providing the compression.
Below are the commands and results of their compression:
1. tar with gzip:
tar c Os.img | gzip --best > Os.tar.gz
This command returned an image of 1.3G.
2. xz only:
xz -z -v -k Os.img
This command returned an image of 1021M.
3. xz with -9:
xz -z -v -9 -k Os.img
This command returned an image of 950M.
4. tar with xz and -9:
tar cv Os.img | xz -9 -k > Os.tar.xz
This command returned an image of 950M.
5. xz with -9 and -e:
xz -z -v -9 -k -e Os.img
This command returned an image of 949M.
6. lrzip:
lrzip -z -v Os.img
This command returned an image of 729M.
Is there any better solution or command-line tool (preferably command line) for the compression?
|
You may try out zstandard:
Highest "standard" compression option:
zstd -19 -c Os.img >Os.img.zstd
Highest ultra compression option:
zstd -22 --ultra -c Os.img >Os.img.zstd
Your mileage may vary, but if compression time is not important and size matters, then zstd is your friend.
| Best compression for operating system image |
1,561,988,135,000 |
I am doing a daily *.tar.gz-backup of graphite whisper databases (/var/lib/graphite) by using this script. The backup destination is a windows share mounted before doing the backup with CIFS.
The script starts daily at 3AM via cronjob.
Mysteriously the *.tar.gz-files are growing by ~12MB daily though the size of the actual directory does not change.
Check the screenshot below. You can see backups from 5 days (30.09.2017 to 04.10.2017) as packed *.tar.gz files with growing size, the decompressed *.tar archives, as well as the unpacked folders, whose size stays the same.
I think it has something to do with the time of packing, but I cannot figure out what the issue is. Furthermore, I am backing up a few other directories (e.g. /etc/phpMyAdmin) and they do not grow. It is only the /var/lib/graphite backup that is growing.
$ uname -a
Linux ******** 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u2 (2017-06-26) x86_64 GNU/Linux
I hope you understand my problem and someone can help me.
Thanks in advance.
|
According to Graphite
documentation,
"Whisper is a fixed-size database", which explains why your
uncompressed files are the same size every day (even though,
presumably, new data is collected all the time). This also explains
why the uncompressed tar archives are all the same size.
The reason why the compressed archives grow in size is probably that,
as actual data is written into Whisper files, it replaces what were
previously zeros, making those files less compressible.
It looks like you are, in fact, creating about 12MB of actual data
every day, and should expect your archives to grow in the same way,
until the retention settings kick in and start aggregating older data
points. At this point, the size of your compressed archives should
stop growing.
If you want to check for that, you can search for the largest files
from your archives, and see how well they compress individually (using
gzip).
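To see the effect for yourself, compare how an all-zeros file (like a freshly pre-allocated fixed-size database) compresses; a sketch:

```shell
# A 10 MB file of zeros, like a freshly created fixed-size Whisper
# database, compresses down to almost nothing:
dd if=/dev/zero of=empty.wsp bs=1M count=10 2>/dev/null
gzip -k empty.wsp
ls -l empty.wsp empty.wsp.gz    # the .gz is a tiny fraction of the original
```

As real data replaces the zeros, the compressed size approaches the amount of actual data written.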
| *.tar.gz backup growing |
1,561,988,135,000 |
Currently I use three steps to gzip some static assets and then use s3cmd to an S3 bucket (technically it's a Digital Ocean Spaces bucket). Here's what I do:
$ find . -type f -name '*.css' | xargs -I{} gzip -k -9 {}
$ find . -type f -name '*.css.gz' | xargs -I{} s3cmd put --acl-public --add-header='Content-Encoding: gzip' {} s3://mybucket/assets/{}
But then I have to manually change all of the extensions in my bucket to remove the .gz extension.
Is there a way that I won't have to manually do step 3? I'd love to know if it's possible in step 2 to remove the .gz extension in the destination. I do want to keep the original files on my server though, so that's a deal breaker.
|
You can use the -exec action in find so that you can do shell string manipulation on the filename. The parameter expansion "${var%.*}" can be used to remove the extension. Below is an example.
find . -type f -name '*.css.gz' -exec bash -c 's3cmd put --acl-public --add-header="Content-Encoding: gzip" "$1" "s3://mybucket/assets/${1%.*}"' -- {} \;
| Find files then move them and rename them at the same time? |
1,561,988,135,000 |
I've got a file.tar.gz that I create and backup to Amazon S3. It's gotten bigger than Amazon's limits and thus has not been backing up.
This is my current command:
tar -cpzf file.tar.gz directory/
Is it better, with TAR's --exclude functionality to skip a directory where the bulk of files live or TAR and Gzip the entire directory and then use something like 'split' to break it up into smaller chunks?
|
If you are splitting, split according to the directory structure: the result is much more portable and much easier to decompress. A single archive split into chunks is fragile — lose or corrupt one chunk and the whole set is unusable, since the pieces are all interdependent. On the other hand, if you have 10 independent archives instead of one, you can still untar them all at once with globbing; it's no harder than one file.
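For completeness, if you do go the split route, a common pattern is the following sketch (the 4G chunk size is an assumption — pick whatever fits under the upload limit):

```shell
# Create the archive and cut it into 4 GB chunks on the fly:
tar -cpzf - directory/ | split -b 4G - file.tar.gz.part-
# Restore by concatenating the chunks back into one stream:
cat file.tar.gz.part-* | tar -xpzf -
```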
| TAR: better to skip directory or use split |
1,561,988,135,000 |
In Debian you can use zgrep to grep through a gzipped file. The reason for gzipping is easy enough: files such as changelogs are vast and compress very well. The issue is that with zgrep you only get the matching line itself, with no lines above or below to give context on the change. An example to illustrate -
usr/share/doc/intel-microcode$ zgrep Fallout changelog.gz
* Implements MDS mitigation (RIDL, Fallout, Zombieload), INTEL-SA-00223
* Implements MDS mitigation (RIDL, Fallout, Zombieload), INTEL-SA-00223
Now as can be seen it seems my chip was affected by RIDL, Fallout and Zombieload bugs which seem to have been fixed by a software patch INTEL-SA-00223 which is mentioned but as can be seen it's pretty incomplete.
The workaround is to use zless and then search with /RIDL (or any of the other keywords), but I'd like to know whether there is any other way, or that's the only workaround. FWIW, I did come to know that the bugs were mitigated on 2019-05-14, when Intel released patches addressing these and various other issues. I tried using 'head' and 'tail' with pipes, but neither proved effective.
|
Zutils (packaged in Debian) provides a more capable version of zgrep which supports all the usual contextual parameters:
$ zgrep -C3 Fallout /usr/share/doc/intel-microcode/changelog.Debian.gz
* New upstream microcode datafile 20190618
+ SECURITY UPDATE
Implements MDS mitigation (RIDL, Fallout, Zombieload), INTEL-SA-00223
CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, CVE-2019-11091
for Sandybridge server and Core-X processors
+ Updated Microcodes:
--
* New upstream microcode datafile 20190514
+ SECURITY UPDATE
Implements MDS mitigation (RIDL, Fallout, Zombieload), INTEL-SA-00223
CVE-2018-12126, CVE-2018-12127, CVE-2018-12130, CVE-2019-11091
+ New Microcodes:
sig 0x00030678, pf_mask 0x02, 2019-04-22, rev 0x0838, size 52224
You can install it with sudo apt install zutils.
| How to use zgrep to find out what line number or give some contextual info. surrounding a .gz file |
1,561,988,135,000 |
I have a .vcf.gz file, with the following aspect:
#CHROM POS ID REF ALT
chr1 10894 chr1:10894:G:A G A
chr1 10915 chr1:10915:G:A G A
chr1 10930 chr1:10930:G:A G A
I want to modify the CHROM column to remove 'chr' and to replace it with nothing, so I want to get a result like the following:
#CHROM POS ID REF ALT
1 10894 chr1:10894:G:A G A
1 10915 chr1:10915:G:A G A
1 10930 chr1:10930:G:A G A
Therefore, I wrote the following command line:
zcat input.vcf.gz | sed 's/^chr//' > output.vcf.gz
and it worked. The problem is that I want to save the output file as a zipped one, with the vcf.gz extension. Even though I named it 'output.vcf.gz', the output file is not actually gzipped.
How can I modify a zipped file and then save it as a zipped file again?
Many thanks!
|
Simply add gzip in the pipe:
zcat input.vcf.gz | sed 's/^chr//' | gzip > output.vcf.gz
| How to modify a gzipped file with sed and then zip again the file? |
1,561,988,135,000 |
I need to delete the last line of a gz file without uncompressing it.
The file has 500 lines.
How can I do that?
I have tried:
gzip -dc "$files" | tail -500 | gzip -c > "$files".tmp
But it doesn't work.
|
You can't modify a compressed file without decompressing it.
At the very least, to delete all text after the 499th line, you have to decompress the first 499 lines to find where the 499th line ends. If you want to delete the last line regardless of how many lines there are, you need to decompress the whole file to identify where the last line starts.
There is no shortcut because the file is compressed. The encoding of a character depends on all the previous characters — the basic principle of gzip compression is to use shorter bit sequences for character sequences that have been encountered previously, and slightly longer bit sequences for character sequences that haven't been encountered yet, thus yielding a smaller file when character sequences are repeated. There's no way to determine that a particular character is a line break without examining all the previous characters.
Your attempt, which decompresses the file, works on the decompressed stream, and recompresses to another file, is on the right track. You just need the correct command to truncate the file: tail -500 keeps the last 500 lines, which isn't what you want. Use head -n 499 to keep the first 499 lines, or head -n -1 to remove the last line. Not all systems support a negative argument for head; if yours doesn't, you can use sed '$d' instead.
gunzip <"$file" | head -n -1 | gzip >"$file".tmp
mv -- "$file".tmp "$file"
Note that you can't directly write to file: gunzip <"$file" | … | gzip >"$file" would start overwriting the file while gunzip is still reading it. The commands in a pipeline are executed in parallel. While it's possible to avoid creating a temporary file, it's a bad idea, because any way to do that would result in a truncated file if the command is interrupted, so I won't discuss how to do it.
In theory, it would be possible to truncate a gzipped file by:
uncompressing it in memory to determine the position where you want to truncate it;
truncating the file to remove all data after the last character to keep;
overwrite the last few bytes to correctly encode the last character;
overwrite a few bytes at the beginning to reflect the new file size.
However this can't be done with standard tools, it would take some custom programming, and it would leave an invalid file if it was interrupted.
| Delete last line of gz file |
1,561,988,135,000 |
Using awk, I want to add a column to this table where the first row is "INFO" and the rest of the rows are all "1".
$ gunzip -c foo.gz | head
SNPID CHR BP Allele1 Allele2 Freq1 Effect StdErr P.value TotalN
rs1000033 1 226580387 t g 0.8266 -0.0574 0.0348 0.09867 17310
rs1000050 1 162736463 t c 0.8545 0.0654 0.0461 0.1564 10864
where
gunzip -c foo.gz | head | cat -A
SNPID^ICHR^IBP^IAllele1^IAllele2^IFreq1^IEffect^IStdErr^IP.value^ITotalN^M$
rs1000033^I1^I226580387^It^Ig^I0.8266^I-0.0574^I0.0348^I0.09867^I17310^M$
rs1000050^I1^I162736463^It^Ic^I0.8545^I0.0654^I0.0461^I0.1564^I10864^M$
Since it's a .gz file I used
gunzip -c foo.gz | \
awk 'BEGIN {FS="\t"; OFS="\t"} NR == 1 {print $0 OFS "INFO"} NR > 1 {print $0 OFS "1"}' | \
gzip > foo.V2.gz
For some reason this seems to change me a column name but not the expected column at the end.
$ gunzip -c foo.V2.gz | head
SNPID INFO BP Allele1 Allele2 Freq1 Effect StdErr P.value TotalN
--------^
rs1000031 1 226580387 t g 0.8266 -0.0574 0.0348 0.09867 17310
rs1000051 1 162736463 t c 0.8545 0.0654 0.0461 0.1564 10864
Weirdly enough, when I cat -A it the column appears to be where it should be.
$ gunzip -c foo.V2.gz | head | cat -A
SNPID^ICHR^IBP^IAllele1^IAllele2^IFreq1^IEffect^IStdErr^IP.value^ITotalN^M^IINFO$
----------------------------------------------------------------------------^
rs1000033^I1^I226580387^It^Ig^I0.8266^I-0.0574^I0.0348^I0.09867^I17310^M^I1$
rs1000050^I1^I162736463^It^Ic^I0.8545^I0.0654^I0.0461^I0.1564^I10864^M^I1$
I'd like to know,
what's happening here?
can I trust gunzip -c foo.V2.gz | head or gunzip -c foo.V2.gz | head | cat -A now??
how to get my expected output using gunzip -c foo.V2.gz | head
SNPID CHR BP Allele1 Allele2 Freq1 Effect StdErr P.value TotalN INFO
rs1000033 1 226580387 t g 0.8266 -0.0574 0.0348 0.09867 17310 1
rs1000050 1 162736463 t c 0.8545 0.0654 0.0461 0.1564 10864 1
Note, I'm using a config script defining SNPID=1; CHR=2; ... where I am depending on the column numbers I'm specifying being correct for the subsequent analyses.
|
As already mentioned you have DOS line endings.
See why-does-my-tool-output-overwrite-itself-and-how-do-i-fix-it for a description of the issue and possible solutions, for example using any awk:
gunzip -c foo.gz |
awk -v OFS='\t' '{sub(/\r$/,""); print $0, (NR>1 ? 1 : "INFO")}' |
gzip > foo.V2.gz
You could use RS="\r\n" but a multi-char RS is a GNU awk extension that's recently been adopted by 1 or 2 other awk variants. With any other POSIX-compliant awk, setting RS="\r\n" will be treated the same as if you set RS="\r", since per POSIX RS can only be a single literal character. It'll also fail on systems where the underlying C primitives strip the \r from the end of lines before awk sees them, so RS="\r?\n" is a bit more robust. With any awk you can leave RS at its default value of \n and add {sub(/\r$/,"")} as the first statement of the script.
I tidied up a couple of other things in your script too, e.g. removing the code setting variables you don't need or already have that value, changing your 2 print statements into 1, using OFS as designed, and getting rid of the unnecessary escapes at the end of lines after a pipe symbol.
| awk appends column in .gz file as seen with cat -A, but changes column name in regular output |
1,561,988,135,000 |
I am trying to extract one file out of a whole tar.gz and it's not working. Below are the commands I tried to reproduce the issue:
mkdir test
touch test/version.txt
echo "1.0.2" > test/version.txt
tar zcvf rootfs.tar -C test .
gzip -f -9 -n -c rootfs.tar > rootfs.tar.gz
tar xf rootfs.tar.gz version.txt
tar: version.txt: Not found in archive
tar: Exiting with failure status due to previous errors
Can you please explain why? The version.txt file is present in the above tar.gz.
|
rootfs.tar is a tar archive compressed with gzip. rootfs.tar.gz is a tar archive compressed with gzip twice. Tar seems to be confused by the double compression and treats the file as an empty archive instead of reporting an error.
Compressing twice is pointless, so remove this extra gzip step. To avoid confusion, call the compressed archive rootfs.tar.gz instead of rootfs.tar. And since the path you're passing to tar starts with ., you need to pass a path starting with ./ when extracting: tar does not treat ./ as a no-op in paths.
mkdir test
touch test/version.txt
echo "1.0.2" > test/version.txt
tar zcvf rootfs.tar.gz -C test .
tar xf rootfs.tar.gz ./version.txt
If you want to avoid the ./ prefix in file names, you can use --transform when creating the archive.
tar zcvf rootfs.tar.gz --transform='s!^\./!!' -C test .
tar xf rootfs.tar.gz version.txt
| tar xf command giving file not found |
1,561,988,135,000 |
Check out:
data/tmp$ gzip -l tmp.csv.gz
compressed uncompressed ratio uncompressed_name
2846 12915 78.2% tmp.csv
data/tmp$ cat tmp.csv.gz | gzip -l
compressed uncompressed ratio uncompressed_name
-1 -1 0.0% stdout
data/tmp$ tmp="$(cat tmp.csv.gz)" && echo "$tmp" | gzip -l
gzip: stdin: unexpected end of file
Ok apparently the input is not the same, but it should have been, logically. What am I missing here? Why aren't the piped versions working?
|
This command
$ tmp="$(cat tmp.csv.gz)" && echo "$tmp" | gzip -l
assigns the content of tmp.csv.gz to a shell variable and attempts to use echo to pipe that to gzip. But the shell gets in the way: null characters are omitted from the result of command substitution. You can see this with a test script:
#!/bin/sh
tmp="$(cat tmp.csv.gz)" && echo "$tmp" |cat >foo.gz
cmp foo.gz tmp.csv.gz
and with some more work, using od (or hexdump) and looking closely at the two files. For example:
0000000 037 213 010 010 373 242 153 127 000 003 164 155 160 056 143 163
037 213 \b \b 373 242 k W \0 003 t m p . c s
0000020 166 000 305 226 141 157 333 066 020 206 277 367 127 034 012 014
v \0 305 226 a o 333 6 020 206 277 367 W 034 \n \f
0000040 331 240 110 246 145 331 362 214 252 230 143 053 251 121 064 026
331 240 H 246 e 331 362 214 252 230 c + 251 Q 4 026
drops a null in the first line of this output:
0000000 037 213 010 010 373 242 153 127 003 164 155 160 056 143 163 166
037 213 \b \b 373 242 k W 003 t m p . c s v
0000020 305 226 141 157 333 066 020 206 277 367 127 034 012 014 331 240
305 226 a o 333 6 020 206 277 367 W 034 \n \f 331 240
0000040 110 246 145 331 362 214 252 230 143 053 251 121 064 026 152 027
H 246 e 331 362 214 252 230 c + 251 Q 4 026 j 027
Since the data changes, it is no longer a valid gzip'd file, which produces the error.
As noted by @coffemug, the manual page points out that gzip will report a -1 for files not in gzip'd format. However, the input is no longer a compressed file in any format, so the manual page is in a sense misleading: it does not categorize this as error-handling.
Further reading:
How do I use null bytes in Bash?
Representing/quoting NUL on the command line
@wildcard points out that other characters such as backslash can add to the problem: some versions of echo interpret a backslash as introducing an escape sequence and produce a different character (or not, depending on how escapes outside their repertoire are treated). With gzip (or most forms of compression), all byte values are roughly equally likely, so every null will be omitted and some backslash sequences may be mangled, modifying the data.
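A quick demonstration of the NUL loss (recent bash versions even print a warning about it):

```shell
# Round-tripping binary data through a shell variable drops NUL bytes:
printf 'a\0b' > orig.bin                 # 3 bytes
tmp="$(cat orig.bin)"
printf '%s' "$tmp" > copy.bin            # only 2 bytes survive: "ab"
cmp orig.bin copy.bin || echo "files differ"
```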
The way to prevent this is not to try assigning a shell variable the contents of a compressed file. If you want to do that, use a better-suited language. Here is a Perl script which can count character-frequencies, as an example:
#!/usr/bin/perl -w
use strict;
our %counts;
sub doit {
my $file = shift;
my $fh;
open $fh, '<', $file or die "cannot open $file: $!";
my @data = <$fh>;
close $fh;
for my $n ( 0 .. $#data ) {
for my $o ( 0 .. ( length( $data[$n] ) - 1 ) ) {
my $c = substr( $data[$n], $o, 1 );
$counts{$c} += 1;
}
}
}
while ( $#ARGV >= 0 ) {
&doit( shift @ARGV );
}
for my $c ( sort keys %counts ) {
if ( ord $c > 32 && ord $c < 127 ) {
printf "%s:%d\n", $c, $counts{$c} if ( $counts{$c} );
}
else {
printf "\\%03o:%d\n", ord $c, $counts{$c} if ( $counts{$c} );
}
}
| gzip same input different output |
1,308,650,780,000 |
I'm trying to find just the names of the files which contain a specific string. The files are compressed (.gz).
I don't have zgrep installed and can't install it. Therefore I can't use the -l option.
I've tried using gzip and gunzip with the -c option and piping to grep -l, but that did not work; I have also used zcat, but that also did not work. Any clue?
(Note: the OS is Solaris 10).
|
You can do the work of zgrep manually. Since you only want the file names, use grep just to test the presence of the pattern, and print out the file name if the pattern is found.
#!/bin/sh
pattern=$1; shift
PATH=`getconf PATH`:$PATH # needed on Solaris 10 and earlier
# to get a standard grep
export PATH
found=0
for x do
if case "$x" in
*.gz|*.[zZ]) <"$x" gzip -dc | grep -q -e "$pattern";;
*) <"$x" grep -q -e "$pattern";;
esac
then
found=1
printf '%s\n' "$x"
fi
done
if [ $found -eq 0 ]; then exit 1; fi
To be run as:
that-script 'pattern' file1 file2.gz file3.Z file.*.gz ...
A few notes specific to you running Solaris 10 (also applies to earlier versions and in some respects to Solaris 11 as well).
on those systems, /bin/sh is a Bourne shell as opposed to a standard POSIX sh. You've got the choice of either changing your shebang to #! /usr/xpg4/bin/sh - to get a standard sh, or restricting yourself to the ancient Bourne syntax like we do here (so no $(...), no case $x in (x)...). (Solaris 11 now uses a POSIX-compliant shell (ksh93) for its /bin/sh.)
on those systems, zcat only handles .Z files as compressed by compress as they were in the olden days. You need to invoke gzip for .gz files.
By default, you don't necessarily get standard utilities. For instance, the default grep in /usr/bin is an ancient one that doesn't support the standard -q option. To get the standard utilities, you need to prepend the directories where they are found (as output by getconf PATH) to $PATH.
If you want to display both the archive member name and the line number or content, you'll need to get the line data from grep and the member name from the script. Remove the -q option from the grep invocation, and postprocess its output.
#!/bin/ksh
pattern=$1; shift
export PATH="$(getconf PATH):$PATH" # needed on Solaris 10 and earlier
# to get a standard grep
found=0
for x do
case "$x" in
*.gz|*.[zZ]) <"$x" gzip -dc | grep -n -e "$pattern";;
*) <"$x" grep -n -e "$pattern";;
esac | {
filename=$x awk '{print ENVIRON["filename"] ":" $0; found=1}
END {exit(!found)}' && found=1
}
done
if [ $found -eq 0 ]; then exit 1; fi
| How to search for text within compressed files and get just the files name |
1,308,650,780,000 |
I'm new to the gzip command, so I googled some commands to run so I can gzip an entire directory recursively. Now while it did that, it converted each of my files to the gzip format and added .gz to the end of each of their filenames.. is there a way to ungzip them all, one by one?
|
There are essentially two options for going through the whole directory tree:
Either you can use find(1):
find . -name '*.gz' -exec gzip -d "{}" \;
or if your shell has recursive globbing you could do something like:
for file in **/*.gz; do gzip -d "$file"; done
| I accidentally GZIPed a whole bunch of files, one by one, instead of using tar(1). How can I undo this mess? |
1,308,650,780,000 |
Is there a difference between zcat/gunzip -c and gunzip, apart from the obvious one that one outputs to the terminal and the other decompresses to a file? I'm looking at it from a CPU-utilization perspective. One of our clients mentioned that they would like a utility that can view the contents of zipped files without decompressing them. They also gave the example of gzcat, but I read up about gzcat and could not find definitive evidence that it is more efficient than gunzip. I could possibly see gzcat using less I/O than gunzip. Are there any other improvements? Any help is appreciated.
|
gunzip and gzcat are both convenience aliases for gzip -d and gzip -cd respectively. In fact, if you look you will see that they are implemented as shell scripts that call gzip with the appropriate options. So there is zero difference in CPU utilization or other performance characteristics.
The I/O difference you could potentially find would depend upon whether or not the output is being written to a file. But that doesn't really depend on how gzip is called, since, for example, all of the following commands write their output to a file:
gunzip file.gz
gzcat file.gz >file
gzip -cd file.gz >file
If you don't write output to a file but to a pipe instead, for example like this:
gzcat file.gz | less
gzip -cd file.gz | mail root
Then you might notice a difference, but it depends on a lot of things such as how big the file is, how often and for how long the pipeline stalls, the amount of buffer memory in your system, etc...
| Is there a difference between gunzip -c and gunzip in terms of system utilization? |
1,308,650,780,000 |
How can I unzip a file from another folder?
gunzip -d /home/tomcat_dev/HN/radd/input.txt.gz
unzips that file to the same folder. But if I want to bring it to my folder? When I do this:
gunzip -d /home/tomcat_dev/HN/radd/o.txt.gz /home/tomcat_dev/
it gives me a: gunzip: /home/tomcat_dev/ is a directory -- ignored
|
It is as simple as redirecting the output to a file in the target directory - note that the redirection target must be a file, not a directory:
gunzip -c /home/tomcat_dev/HN/radd/o.txt.gz > /home/tomcat_dev/o.txt
For more on gunzip, refer to the man page.
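If this comes up often, a tiny wrapper can derive the output name automatically. gz_to_dir is a made-up name used here for illustration, not a standard tool:

```shell
# Hypothetical helper: decompress a .gz file into a target directory,
# keeping the original file name minus its ".gz" suffix.
gz_to_dir() {
    out="$2/$(basename "$1" .gz)"
    gunzip -c "$1" > "$out"
}

# e.g.: gz_to_dir /home/tomcat_dev/HN/radd/o.txt.gz /home/tomcat_dev
```

The source .gz file is left untouched; only a decompressed copy lands in the target directory.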
| unzip a file from another folder |
1,308,650,780,000 |
I have a file I need to split into smaller sizes (<24M when zipped)
Heres the file:
498775505 Mar 8 00:08 test.file
I split it:
split -b 125000k test.file test.file.
Now I have even sized files (apart from the last file which is fine)
476M Mar 8 00:08 test.file
123M Mar 8 00:09 test.file.aa
123M Mar 8 00:09 test.file.ab
123M Mar 8 00:09 test.file.ac
110M Mar 8 00:09 test.file.ad
But when I gzip these files, they do not zip down evenly
gzip test.file.a*
476M Mar 8 00:08 test.file
27M Mar 8 00:09 test.file.aa.gz
23M Mar 8 00:09 test.file.ab.gz
22M Mar 8 00:09 test.file.ac.gz
20M Mar 8 00:09 test.file.ad.gz
Can somebody explain what is happening here with gzip?
(This is more out of curiosity as I can just split them into smaller amounts to get them under 24M, just wondering how gzip works here)
|
The split files contain different parts of the original (full) file, they probably have different contents. (The only way they would be identical would be for the original to be highly repetitive.)
Different contents result in different compression results. Stuff like aaaaaaaaaa is easier to compress than wekfsiorlm. In files of 123 MB, there's quite a lot of space for one file to be more "random"-looking (more difficult to compress) than another, even if it's not as extreme as my example here.
If you want to control the sizes of the compressed result files, you could split the original into smaller pieces, compress them individually, and then concatenate the compressed parts together, up to the desired size limit. (I can't think of a trivial way to do that, though.)
If the input to gzip -d contains multiple concatenated gzip members, it decompresses them all, though this would lose some compression performance, since the splitting introduces artificial breaks in the data.
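A rough sketch of that idea (names and sizes here are arbitrary, and this bounds each member's input size rather than hitting an exact compressed-size target): split first, compress each piece, then join the pieces, since concatenated gzip members form a valid multi-member stream.

```shell
# Split into small fixed pieces, compress each, then join.
# The result is a multi-member gzip stream; `gzip -dc` on it
# reproduces the original data in one pass.
split -b 1M test.file chunk.
gzip chunk.*
cat chunk.*.gz > test.file.gz
rm chunk.*.gz
```

Afterwards, zcat test.file.gz | cmp - test.file should report no differences.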
| why does gzip not create files of equal size? |
1,308,650,780,000 |
when we uncompressed tar.gz file with tar xfz
tar xfz redhatPkgInstallation.tar.gz
we get the following errors
gzip: stdin: decompression OK, trailing garbage ignored
tar: Child returned status 2
tar: Error is not recoverable: exiting now
failed while , error 2
is it possible to check tar.gz files propriety before we untar the file?
goal - check/validate the tar.gz file before un-tar ,
|
Importing from stackoverflow user John Boker's answer, one can do it in several ways:
To test the gzip file is not corrupt:
gunzip -t file.tar.gz
To test the tar file inside is not corrupt:
gunzip -c file.tar.gz | tar t > /dev/null
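Both checks can be rolled into one small guard; check_targz is a hypothetical name for illustration, not an existing utility:

```shell
# Hypothetical wrapper: succeed only if both the gzip layer and the
# tar archive inside it are readable end to end.
check_targz() {
    gunzip -t "$1" && gunzip -c "$1" | tar tf - > /dev/null
}

# check_targz redhatPkgInstallation.tar.gz && tar xzf redhatPkgInstallation.tar.gz
```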
| how to check/validate the tar.gz files before un-tar |
1,308,650,780,000 |
I have a series of gzip files which I wish to store more efficiently using xz, without losing traceability to a set of checksums of the gzip files.
I believe this amounts to being able to recreate the gzip files from the xz files, though I'm open to other suggestions.
To elaborate... If I have a gzip file named target.txt.gz, and I decompress it to target.txt and discard the compressed file, I want to exactly recreate the original compressed file target.txt.gz. By exactly, I mean a cryptographic checksum of the file should indicate that it is exactly the same as the original.
I initially thought this must be impossible, because a gzip file contains metadata such as original file name and timestamp, which might not be preserved upon decompression, and metadata such as a comment, the source operating system, and compression flags, which are almost certainly not preserved upon decompression.
But then I thought to modify my question: is there a minimal amount of header information that I could extract from the gzip file that, in combination with the uncompressed data, would allow me to recreate the original gzip file.
And then I thought that the answer might still be no, due to the existence of tools such as Zopfli and 7-zip, which can create gzip-compatible streams that compress better than (and therefore differently from) the standard gzip program. As far as I am aware, the gzip file format does not record which of these compressors created it.
So my question becomes: are there other options I haven't thought of that might mean I can achieve my goal as set out in the first paragraph after all?
|
This may be helpful: https://github.com/google/grittibanzli
Grittibanzli is a tool to compress a deflate stream to a smaller file, which can be decoded to the original deflate stream again. That is, it compresses not only the data inside the deflate stream, but also the deflate-related information such as LZ77 symbols and Huffman trees, to reproduce a gzip, png, ... file exactly.
| Can I recreate a gzip file exactly, given the original uncompressed file? |
1,308,650,780,000 |
I am using tar with pigz to compress a folder and save a backup. Size of this folder ~250 GB or more. This folder has variety of content including numerous text and log files, ISOs and zip files in many different sub-folders.
Full compression of this folder takes around 1 hour (or more sometimes). At this moment I use this in a script.
tar -cf - <data_folder> | pigz -1 > <output_file>.tar.$
I want to reduce the compression time by excluding compression on ISOs and zip files. I want them (ISOs and zip files) to be included in the gzip file as such (uncompressed).
My question is this: Is it possible to selectively compress files based on type, and still include uncompressed files in the gzip output? How to try this out?
|
No, you can't. At least not directly.
tar doesn't do any compression. It merely reads part of the (virtual) file system, and generates one cohesive stream from it. This stream is then often passed to a compression tool/library, for instance gzip/libz. The compression part does not see or even know about individual files. It just compresses the stream generated by tar. Therefore you cannot add selective compression to your current approach.
What you can do is incrementally build the tar archive, by compressing every file individually and then adding it to the tar archive. By doing so you can choose to add (for example) ISO images uncompressed to the archive. Note however, that the tar archive itself will not be compressed. Consequently, after untarring it, you would also have to uncompress each file individually, where appropriate.
How much time do you actually lose by compressing ISOs and zip files? Seeing as tar | pigz > <file> is stream processing, I'd guess you are not losing that much time. Blocks are written to disk while the next blocks are being compressed and while the stream is being built. It is happening in parallel.
Maybe you can optimize your strategy:
You could put all ISO and zip files into dedicated directories and then build your archive in three steps: tar & compress the rest, add the ISO dir, add the zip dir. The resulting archive still needs a lengthy extraction procedure: untarring the outer archive and then uncompressing and untarring the inner archive. Yet, this is more feasible than uncompressing every individual file.
Or you tune the commands: Does it have to be a tar archive of a file system, or could you use dd to back up the entire partition? Backing up the entire partition has the advantage of continuous reads from the disk(s), which may be faster than working with the file system. I am sure you can tune pigz to work with bigger chunks, which should give you a speedup if ISO and zip files are your problem. Also, you could add some buffering (e.g. mbuffer) before writing the result to disk, to further optimize media access.
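The dedicated-directory strategy might look roughly like this (paths are examples; gzip -1 stands in for pigz -1 so the sketch has no extra dependency, and --exclude assumes GNU tar):

```shell
# 1. Compress everything except the already-compressed directories.
tar -cf - --exclude='data_folder/isos' --exclude='data_folder/zips' data_folder |
    gzip -1 > rest.tar.gz
# 2. Wrap the compressed part and the uncompressed directories in a
#    plain (uncompressed) outer tar archive.
tar -cf backup.tar rest.tar.gz data_folder/isos data_folder/zips
rm rest.tar.gz
```

Extraction then means untarring backup.tar and, where needed, untarring rest.tar.gz.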
| Compressing a folder but do not compress specific file types but include them in the gz file |
1,308,650,780,000 |
I'm writing a shell script to compress and backup certain files. This will compress and move large files with sizes up to 4 GB.
I'm having trouble with this line:
gzip < $filelocation > $backuplocation
Where
$filelocation = /home/user/image.img
$backuplocation = /home
And I will add a similar line to decompress the file
gunzip < $filelocation > $backuplocation
Now it doesn't work for some reason.
I try
gunzip $filelocation > $backuplocation
Will I be able to pipe it and move the compressed file instead into the directory?
|
You don't make a backup with gzip into a directory, but into a different file:
gzip < file.in > file.out
or in your case:
filelocation=/home/user/image.img
backuplocation=/home/image.img.gz
gzip < "$filelocation" > "$backuplocation"
based on that you can do:
gunzip < "$backuplocation" > /some/new/location/image.img
| Bash: using both way redirections in a shell script |
1,308,650,780,000 |
On AIX 6.1 without GNU Tool installed I have a .tar.gz file.
On Ubuntu I would use the very useful command :
tar xzf myfile.tar.gz
but in this case the z switch is not possible.
I do remember a long time ago a way to use gzip | tar, but I really can't remember it.
Can someone help me with a command that could work on almost every system, even old-fashioned Unix?
|
Make gzip feed the uncompressed tar archive to tar:
gunzip < myfile.tar.gz | tar xvf -
(note that it's what GNU tar actually does internally, except that it will also report gunzip errors in its exit status).
Use gzip -d if gunzip is not available. You might also have a zcat, but that one may only work with .Z files (compressed with compress instead of gzip).
| portable command to unzip and untar on non GNU and old unix |
1,308,650,780,000 |
I am looking to speedup gzip process. (server is AIX 7.1)
More specifically, the current implementation is gzip *.txt and it takes up to 1 hour to complete.
(the file extractions are quite big and we have a total of 10 files)
Question: Will it be more efficient to run
pids=""
gzip file1.txt &
pids+=" $!"
gzip file2.txt &
pids+=" $!"
wait $pids
than
gzip *.txt
Is the gzip *.txt behavior the same in terms of parallelism, CPU consumption, etc. as the gzip jobs in the background (&), or is the other option more efficient?
|
Don't reinvent the wheel. You can use pigz, a parallel implementation of gzip, which should be in your distribution's repositories. If it isn't, you can get it from here.
Once you've installed pigz, use it as you would gzip:
pigz *txt
I tested this on 5 50M files created using for i in {1..5}; do head -c 50M /dev/urandom > file"$i".txt; done:
## Non-parallel gzip
$ time gzip *txt
real 0m8.853s
user 0m8.607s
sys 0m0.243s
## Shell parallelization (same idea as yours, just simplified)
$ time ( for i in *txt; do gzip $i & done; wait)
real 0m2.214s
user 0m10.230s
sys 0m0.250s
## pigz
$ time pigz *txt
real 0m1.689s
user 0m11.580s
sys 0m0.317s
| gzip *.txt vs gzip test.txt & gzip test2.txt & |
1,308,650,780,000 |
Zip utility allows us to
Set the "last modified" time of the zip archive to the latest (oldest)
"last modified" time found among the entries in the zip archive
with
zip -o [...]
or
zip --latest-time [...]
What's the easiest way of doing the same with TAR?
|
new_file="$(find dir/ -type f -exec stat --printf='%n\0%Y\n' {} + | sort -k2,2 -nt '\0' | tail -n1 | cut -d '' -f1)"
tar -zcf foo.tar.gz dir/
touch -r "$new_file" foo.tar.gz
Example of what happens
tar -zcf foo.tar.gz dir/; touch -r fileX foo.tar.gz
You should change foo.tar.gz and dir/ in the above commands.
It
finds all files in dir/ (the directory to be tarred)
gets each file's last-modification timestamp in seconds since the epoch
sorts numerically and grabs the bottom one (the latest)
touch uses that file as a reference when adjusting the foo.tar.gz timestamp.
| tar's equivalent of zip -o (--latest-time)? |
1,308,650,780,000 |
I've got a bunch of .tar.gz files in different paths. I'd like to create a new .tar.gz file at some common ancestor of them and I don't want it to be composed of nested .tar.gz files. How can I easily flatten the archive once created?
|
Here is a bash script to recursively extract a tar archive, remove the original nested archives and create a new archive. It takes two arguments - the first is the original archive, the second is the name for the new archive. Both must be relative paths. This extracts each nested archive in place, but will refuse to clobber any existing files (to allow clobbering, remove the -k option from the tar command). Another approach to avoid clobbering would be to create a new directory for each archive and extract it there.
#!/bin/bash
archive="$1"
new_archive="$2"
# common extensions, full list at
# http://www.gnu.org/software/tar/manual/html_section/Compression.html#auto_002dcompress
match_archives='.*\.\(tar\|\(tar\.\(gz\|bz2\|xz\)\)\|\(tgz\|tbz\)\)$'
recursive_extract ()
{
retval=0
while read -rd '' path
do
if [ -e "$path" ]
then
nested_archive=${path##*/}
if cd "${path%/*}" && tar -xakf "$nested_archive"
then
rm "$nested_archive"
find . -regex "$match_archives" -print0 | recursive_extract
retval=$?
else
echo "Error extracting $nested_archive, not removing"
retval=1
fi
fi
done
return $retval
}
tmpdir=$(mktemp -d)
cd "$tmpdir"
tar -xaf "$OLDPWD/$archive" &&
find . -regex "$match_archives" -print0 | recursive_extract &&
tar -caf "$OLDPWD/$new_archive" * &&
cd -- "$OLDPWD" &&
rm -rf "$tmpdir" ||
echo "Errors, please review $tmpdir"
Note if the extraction results in an error, it is possible for the above to attempt to extract the same archive multiple times.
| How can I flatten nested .tar.gz files? |
1,308,650,780,000 |
Say I have a file.txt containing one line:
strawberry
Using
tar -czf archive.tar.gz file.txt
I can create a "gzipped archive" with the name archive.tar.gz, which is to my knowledge the equivalent of doing
tar -cf archive.tar file.txt
gzip archive.tar
Now, to extract archive.tar.gz, I would use
tar -xzf archive.tar.gz
to get my file.txt. This should be the equivalent of doing
gzip -d archive.tar.gz
tar -xf archive.tar
However, if I just use the command
tar -xf archive.tar.gz
without decompressing the file first through gzip or the --gzip option in tar, I still get my file.txt and can read the content.
I changed the file ending from archive.tar.gz to just archive.tar and got the same results.
How does tar know that it has to decompress the file first given that the file ending can be changed by the user? Do I miss some essential knowledge about file storing that allows tar to notice the compression?
|
tl;dr - GNU tar is smart
In the good old, bad old days you had to do exactly as you say and use -xzf. Nowadays, tar opens the archive, has a quick peek, and if the contents look compressed it invokes the appropriate decompressor automatically for you. And that's it. You can take a look at man tar to see what compression algorithms it supports.
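A quick way to see this in action with GNU tar (file names here are arbitrary):

```shell
# tar inspects the archive's magic bytes, not its file name:
echo strawberry > file.txt
tar -czf archive.tar.gz file.txt
mv archive.tar.gz mystery.bin    # the extension carries no information
rm file.txt
tar -xf mystery.bin              # gzip layer is detected and removed
cat file.txt                     # strawberry
file mystery.bin                 # identified as gzip compressed data
```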
| Gzipped archives - Create using tar -czf and extract using only -xf? |
1,308,650,780,000 |
I'm writing the software upgrade functionality for a piece of embedded equipment. Currently the root file system is upgraded by taking a new rootfs.tar.gz file and unpacking it to the root file system (overwriting existing files and adding new ones). But this doesn't delete files that aren't in the new package.
So now I'm going to have to rm -rf all of the existing files, then unpack the new rootfs tar file. This seems like a lot of work, and takes a lot of time, and maybe the only thing that changed was one config file.
Instead, I'd like to be able to rsync the contents of a .tar.gz file to a directory. Is this doable without going through some sort of intermediary (meaning, without unpacking the tar to a temp directory then doing the rsync)?
|
Directly, there's not any way to do what you want. As mentioned in the comments, archivemount may be an option, though it has its own limitations (one of which being a dependency on a particular config option being enabled when you built the kernel).
However, there are two alternative options that come to mind:
Use SquashFS images instead of gzipped tar files. These can be mounted directly by the kernel, and in many cases will actually be smaller than an equivalent compressed tar file. You can then just use rsync from the mounted SquashFS image. If you're using Buildroot for actually building the root filesystem for this, there's an option to just directly generate such an image.
Instead of just having one root partition, use two. When you do an initial install, write all the data onto both partitions, and set up the bootloader to only boot from one of them. When you upgrade the system, nuke the other partition, extract your root archive there, update the bootloader to boot from that, and then reboot. While this will take a bit longer to run updates than what you are proposing, it has a couple of pretty big advantages:
It lets you roll back an upgrade that broke some functionality almost instantly. Instead of having to re-install the old firmware, you just update the bootloader to boot off of the other root partition and reboot.
It makes it very difficult to brick the system. Because you only update the bootloader after the update is otherwise finished, you can guarantee that you won't boot into a partial root filesystem because of power-loss in the middle of an update.
It makes it possible to recover from at-rest data corruption in the storage hardware without needing RAID (though if this needs to be a reliable device, you should seriously consider RAID anyway). If you know your current version is 'bad', just roll back temporarily to the previous one, and then re-install the new version.
It reduces downtime. For your strategy, you have to do almost everything with the device functionally off-line (otherwise you run the risk of things failing mysteriously during the update). With this method, you can reduce the downtime required for an update to however long it takes to reboot the device.
| Restore files from tar using rsync |
1,308,650,780,000 |
I have a tar.gz archive that has been split apart like so -
files.tar.gz.part-aa.gz
...
...
...
files.tar.gz.part-ap.gz
I combine them using the cat command -
cat files* > files.tar.gz
This combines the files into a 16 GB tar.gz file (which is expected)
If I try to run tar -zxf files.tar.gz I get the following error -
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: error is not recoverable: exiting now
If I check the file type with
file files.tar.gz
then the output I get is weird - it says -
files.tar.gz: data
Any ideas?
EDIT -
I ran some of the commands that you guys were mentioning I should try. Here are the results -
Ran the following commands on several different file parts. As you will see, the output is different for some file parts.
Command Run: gunzip files.tar.gz.part-ap.gz
Output/Response: gzip: filestar.gz.part-ap.gz: not in gzip format
Command Run: file files.tar.gz.part-ap.gz
Output/Response: files.tar.gz.part-ap.gz: data
Command Run: gzip -d files.tar.gz.part-ap.gz
Output/Response: gzip: files.tar.gz.part-ap.gz: not in gzip format
Command Run: file files.tar.gz.part-ah.gz
Output/Response: files.tar.gz.part-ah.gz: 8086 relocatable (Microsoft)
Command Run: file files.tar.gz.part-ak.gz
Output/Response: files.tar.gz.part-ak.gz: data
Thanks.
|
Interpreting the file extensions, it looks like these are compressed portions of a compressed archive; try this:
gunzip files.tar.gz.part-??.gz
cat files.tar.gz.part-?? > files.tar.gz
tar xf files.tar.gz
| Issue getting split tar.gz file to recombine and extract. Not a valid .gz archive |
1,308,650,780,000 |
What are the reasons (historical, compatibility issues ..) for the existence of gunzip despite gzip offers the same (and extended) functionality?
I checked /bin/gzip and /bin/gunzip and they are separated executables.
Why not an unique executable and two symlinks?
|
Historically, it was common for compressors to have symmetric decompressors, for one simple reason: when you start distributing files compressed with a new compression program, you want recipients to be able to easily decompress them. Generally speaking, when they're separate, the decompression program is simpler and smaller than the compression program, which makes it easier to transfer (think of transmission speeds and storage capacity thirty years ago), and easier to port to new systems too.
I can't find the original releases of gzip so I don't know if that applied in this particular case; the oldest version I've found, 1.2.4, released in 1993, already used a single binary for gzip and gunzip. Note that current versions still support this; gunzip is just a wrapper shell script.
| Why the existence of gunzip? |
1,308,650,780,000 |
I am going to be combining 15 different gzip files. Ranging in size from 2 gigs to 15 gigs each so the files are relatively large. I have done research on the best way to do it, but I still have some questions.
Summary:
Starting with 15 different gzip files I want to finish with one sorted, duplicate free file in the gzip format.
For sake of conversation I will label the files as follows: file1, file2 ... file15
I am planning to use the sort command with the -u option. According to the man page for sort this means:
-u, --unique
with -c, check for strict ordering; without -c, output only the first of an equal run
So what I am thinking of doing is this:
sort -u file* > sortedFile
From my understanding I would have one file that is sorted and does not have any duplicates. From my test files I created this seems to be the case but just want to verify this is correct?
Now another wrinkle to my dilemma:
Because all of the files are in the gzip format, is there a way to use zcat or another method to pipe the output to sort, without first having to convert from gzip to a text file, combine, and then compress them back into gzip? This would save a huge amount of time. Any input is appreciated. I'm looking for advice on this; I am not against research, nor am I married to my method. I would like some insight before I start running these commands against 120 gigs of data.
Thanks peoples!
|
The problem is that the individual files are unsorted, i.e. if you used something like sort -u file* > sortedFile, sort would have to load the contents of all files and then sort them. I assume this is inefficient, given that you probably do not have more than 120 gigs of RAM.
I would suggest that you first sort all files individually, and then merge them using sort -m, something along these lines (this code is untested!):
for f in file*; do
gzip -dc "$f" | sort > sorted.$f.bak
done
sort -m -u sorted.file*.bak > sortedFile
rm -f sorted.file*.bak
Relevant part of the sort man page (e.g. http://unixhelp.ed.ac.uk/CGI/man-cgi?sort):
-m, --merge
merge already sorted files; do not sort
Update: After reading https://stackoverflow.com/questions/930044/how-could-the-unix-sort-command-sort-a-very-large-file, I think that your original command might be just as fast, since sort splits up its input into manageable chunks anyway. Your command line would then look like:
sort -u <(zcat file1) <(zcat file2) ... <(zcat file15) > sortedFile
This would also enable the use of more than one core of your machine.
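If the output should end up compressed again, as the question asks, the same idea extends naturally (this uses bash/ksh/zsh process substitution; the file names are placeholders):

```shell
# Decompress on the fly, sort with duplicate removal, recompress;
# nothing is ever written uncompressed to disk.
sort -u <(zcat file1.gz) <(zcat file2.gz) | gzip > sortedFile.gz
```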
| Combining, sorting and deleting duplicates in numerous gzip files |
1,308,650,780,000 |
I have 120 .gz files (each about 5G) with dos line endings, my goal is to convert those to unix line endings, but I don't want to wait multiple days.
Here is my current approach:
function conv() {
tmpfile=$(mktemp .XXXXXX)
zcat $1 > $tmpfile
dos2unix $tmpfile
gzip $tmpfile
mv $tmpfile.gz $1
}
for a in $(ls *.fastq.gz); do
echo "$a"
conv "$a" &
done
Is there any way to get the line endings fixed without unpacking and repacking?
|
In any case, you need to uncompress and compress. But you don't need to save the result at every step. So:
set -o noclobber
for file in *.fastq.gz; do
gzip -d < "$file" | dos2unix | pigz > "$file.new" &&
mv -- "$file.new" "$file"
done
Here using pigz to leverage more processors at a time to do the compression.
You could also use GNU xargs' -P to run several of those pipelines in parallel:
printf '%s\0' *.fastq.gz |
xargs -r0 -P4 -n1 sh -o noclobber -c '
gzip -d < "$1" | dos2unix | gzip > "$1.new" &&
mv -- "$1.new" "$1"' sh
Here running 4 in parallel. Note that gzip -d, dos2unix and gzip in each pipeline already run in parallel (with gzip likely being the pipeline's bottleneck). Running more in parallel than you have CPUs is likely to degrade performance. If you have fast CPUs and/or slow storage, you might also find that I/O is the bottleneck and that running more than one in parallel actually already degrades the performance.
While you're recompressing, it may be a good opportunity to switch to a different compression algorithm that better suits your needs, such as one that compresses better or faster, lets you access data randomly in independent blocks, or decompresses faster...
| Convert dos line endings in .gz zipped file faster |
1,308,650,780,000 |
I'm trying to figure out a way that I can update a compressed file (currently using zip, but am open to tar/gz/bz derivatives too) on a linux server without creating a temp file for the file to be compressed.
I'm compressing an entire domain's directory (about 36Gb +- at any given time) and I have limited drive space on the webserver. The issue is that, as zip builds the new compressed file, it creates a temp file which presumably overwrites the existing zip file when it's complete, but in the process, the 36Gb of the source directory + the 32Gb of the existing zip file + the 30-some Gb of the temp file come very close to maxing out my drive space, and at some point in the future it will exceed the drive's available space.
Currently, the directory is backed up using a cronjob command, like so:
0 0 * * * zip -r -u -q /home/user/SiteBackups/support.zip /home/user/public_html/support/
I don't want to delete the zip file each time, firstly because the directory is zipped every 4 hours and also because the directory is so large, it is rather resource intensive to re-zip the entire directory as opposed to just updating it - at least I believe that to be true. Perhaps I'm wrong?
Additionally, breaking it out into different commands for different directories will not work because a large portion of the data (30 ish Gb out of the total 36Gb) is all in one directory and the file names are GUIDs, so there's no way to target files in a predictable way.
Thanks in advance to the sysadmins with some terminal jujitsu!
|
This is almost certainly not going to work (update: see also this answer)
The Zip archive (though things change little with other archive formats) is built like a file system: each file's data is stored in sequence, with a Central Header at the end recording every file's name and offset.
Suppose we were to update File#1 without moving File#2, and with File#1 potentially larger once compressed. This would require:
remove Central Header
add File#1 Data (2nd copy) after File#2
add Central Header back again, with updated offset for File#1
This creates a "dead zone" at the beginning of the Zip file, where File#1's old data used to be. It would be possible to use that area for further storage of another file: basically you'd need to zip the incoming file into a temporary file, thus getting its final size; armed with that, you would scan the zip file and look for "holes". If a suitable "hole" exists, copy the temporary file into the zip file, possibly leaving a smaller "hole"; otherwise, add it at the end, replacing the central header.
While possible, managing the slack space inside the Zip archive as well as the coalescing of adjacent "holes" requires care, and to my knowledge nobody ever did that (I could for example write a compression-agnostic utility to replace a file inside a Zip file, using the main zip utility to generate the new compressed stream and replacing the old file name with a recognizable sequence to mark it as free space; it would be horribly slow).
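As a small illustration of that layout, the central directory is located via the End Of Central Directory record at the tail of the archive, which you can find by scanning for its signature bytes, PK followed by 0x05 0x06 (the helper name below is made up for this sketch):

```shell
# Print the byte offset of the last End Of Central Directory signature
# in a zip file. Requires a grep that supports -a, -b and -o
# (GNU and BSD grep both do).
eocd_offset() {
  grep -abo "$(printf 'PK\005\006')" "$1" | tail -n1 | cut -d: -f1
}
```

The fact that this record sits at the very end, and points backwards at the central directory, is exactly why updating a member in place is so awkward.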
The closest you can get to what you want would be to use a completely different format - you'd create, say, a btrfs file system on a loop device, setting it to the maximum compression available (I believe that would be LZO). Then you mount the loop device and use rsync to update it. Unmount the loop device, and the host file is a compressed archive... of sorts. Depending on the file nature, you might even be able to leverage btrfs's deduplication capability.
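A rough sketch of that setup, wrapped in a function (not run here: everything inside needs root, and the paths, 40G size and mount point are illustrative, not taken from the question):

```shell
make_compressed_backup() {
  # One-time setup: a sparse host file, formatted as btrfs.
  truncate -s 40G /home/user/backup.img
  mkfs.btrfs -q /home/user/backup.img
  # On every backup run: mount with compression, sync, unmount.
  mkdir -p /mnt/sitebackup
  mount -o loop,compress=lzo /home/user/backup.img /mnt/sitebackup
  rsync -a --delete /home/user/public_html/support/ /mnt/sitebackup/
  umount /mnt/sitebackup
}
```

Because rsync only transfers changed files, each run touches far less data than re-zipping the whole tree, and no second copy of the archive ever exists.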
The compression ratio of compressed file systems is lower than Zip's, but many file types (PDF, ZIP obviously, most image formats like JPEG, PNG and GIF, modern (Libre)Office formats...) cannot be compressed further, so this is not a problem. Since you say that the uncompressed files are 36GB and the Zip is 32GB, you're probably in this situation, and the lower compression ratio shouldn't cost you much.
| Updating a large compressed file without creating temp file |
1,308,650,780,000 |
I'm making a backup of Raspbian (I know this isn't the Raspberry Pi SE, but it's a Linux question, it probably isn't just Raspbian that has this problem, and BTW the size of the drive is 128GB). The first backup is only 68GB after compression. Then I deleted the first backup. The next backup is over 100GB in size! If I delete the second backup and do another one, I run out of space while making the backup, since I use sudo dd if=/dev/mmcblk0 bs=1M | gzip - | dd of=~/Desktop/backup-23-may-2020.gz and, because the unused space isn't all zeros, the compression keeps getting worse.
To my knowledge, deleting a file with rm just marks the file as deleted instead of zeroing it out. I want to be able to completely zero out all the deleted files, so that when I back up the whole disk the compression is better, because all the unused space is zeros.
Will this command do that? (You will need pv, the pipe viewer, installed if you want to try it out: sudo apt install pv)
dd if=/dev/zero | pv -s 100g -S | dd of=~/zeros.txt
EDIT 2: Forgot the =. Thanks @Hermann.
I don't want to blindly execute this command because I did compile OpenCV on here and I refuse to do that again.
EDIT: According to df -h I have 102G of free space.
pi@raspberrypi:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 115G 7.9G 102G 8% /
devtmpfs 1.6G 0 1.6G 0% /dev
tmpfs 1.7G 0 1.7G 0% /dev/shm
tmpfs 1.7G 26M 1.7G 2% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
...
|
Will this command do that?
It is missing the = after the if, but apart from that: yes. I do not know whether it is the most efficient way to achieve it, but it will do the job.
Though I recommend a dd-only variant like this:
dd if=/dev/zero of=~/zeros.txt bs=16M status=progress
No need for pv.
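Wrapped up with the cleanup step (the filler file must be deleted afterwards, or the "freed" space stays allocated); the optional size cap is not part of the original suggestion, it's only there so the function can be tried out safely:

```shell
zero_free_space() {
  # $1: a directory on the target filesystem
  # $2: optional cap in MiB. Omit it to fill all free space; dd then
  #     exits non-zero when the filesystem is full, which is expected.
  dir=$1
  if [ -n "${2:-}" ]; then
    dd if=/dev/zero of="$dir/zeros.bin" bs=1M count="$2" 2>/dev/null || true
  else
    dd if=/dev/zero of="$dir/zeros.bin" bs=16M 2>/dev/null || true
  fi
  sync   # make sure the zeros actually reach the card before deleting
  rm -f "$dir/zeros.bin"
}
```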
For complete root file-system backups, I recommend an offline backup with e2image: power down the Pi, move the card into a PC, and do not mount the root partition. Instead, shrink it with resize2fs -M, create a copy with e2image -rap, then expand it again with resize2fs.
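As a sketch of that offline procedure (everything here needs root, the partition must be unmounted, and /dev/sdX2 is a placeholder for whatever device name the card's root partition gets on your PC):

```shell
offline_backup() {
  dev=$1   # e.g. the card's root partition, unmounted
  img=$2   # destination image file on the PC
  resize2fs -M "$dev"          # shrink the filesystem to its minimal size
  e2image -rap "$dev" "$img"   # copy only the blocks that are in use
  resize2fs "$dev"             # grow it back to fill the partition
}
```

Because only in-use blocks are copied, the image never contains the stale data in freed blocks, which is the same problem the zero-filling trick works around.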
Using partclone is probably even better, but I have no first-hand experience with it.
| How can I completely zero out all free blocks (deleted files) |