1,333,655,309,000
After downloading a file that has an md5 checksum available, I currently check it with md5 *file* | grep *given_checksum*, e.g.

md5 file.zip | grep -i B4460802B5853B7BB257FBF071EE4AE2

but it seemed funny to me to require grep and the pipe for what is surely a very common task. A stickler for doing things efficiently, I wondered if there is a better way of doing this?
md5sum has a -c option to check an existing set of sums, and its exit status indicates success or failure. Example:

$ echo "ff9f75d4e7bda792fca1f30fc03a5303  package.deb" | md5sum -c -
package.deb: OK
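In a script, the exit status of md5sum -c is what you branch on. A minimal, runnable sketch; the file name is a stand-in for your download, and the demo file's content is chosen so its md5 is known (098f6bcd4621d373cade4e832627b4f6 is the md5 of the string "test"):

```shell
# Verify a file against a known checksum using md5sum -c's exit status.
printf 'test' > package.deb            # demo file whose md5 is known
sum=098f6bcd4621d373cade4e832627b4f6   # published checksum would go here
if echo "$sum  package.deb" | md5sum -c - >/dev/null 2>&1; then
  echo checksum OK
else
  echo checksum MISMATCH >&2
fi
```

Note the two spaces between hash and filename: that is the format md5sum -c expects.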
A simpler way of comparing md5 checksum?
I have an ISO file, which I burned to a CD. Now how can I check that the CD was created correctly? I would like a command that calculates the hash sum, which I can then check against the hash sum I calculate on the ISO file. Ideally the command should:

- Work regardless of the ISO file: that is, I don't want to keep a list of hash sums for each file on the disc, or remember the number of blocks, whatever
- Be relatively short: a one-line command is great, a chain of commands two lines long is OK, a script that spans one page is not
- Be fairly efficient: for example, dd-ing the disc back to a file and then running md5sum on the file is unacceptable

If no answer can satisfy all of these, I will appreciate the nearest match too. Even better if you can tell me why it is not so straightforward.
The basic problem is that we want to take the md5sum of the exact same information that was on the ISO originally. When you write the ISO to a CD, there is likely blank space at the end of the disc, which inevitably changes the md5sum. Thus the very shortest way:

md5sum /dev/cdrom

doesn't work. What does work (and is common in online documentation) is reading only the exact number of bytes from the device and then taking the md5sum. If you know the number of bytes you can do something like:

dd if=/dev/cdrom bs=1 count=xxxxxxxx | md5sum

where 'xxxxxxxx' is the size of the ISO in bytes. If you don't know the number of bytes offhand, but still have the ISO on your disk, you can get it with stat, doing something like the following:

dd if=/dev/cdrom | head -c `stat --format=%s file.iso` | md5sum

There are many other one-line constructions that should work. Notice that in each case we are using dd to read the bytes from the disc, but we aren't piping them to a file; rather, we hand them to md5sum straight away. Possible speed improvements can be made by using a bigger block size (the bs= in the dd command).
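The padding effect is easy to reproduce without a CD drive; here a zero-padded copy of the image stands in for /dev/cdrom (a sketch with ordinary files, not the real device):

```shell
# Simulate "burned disc = image + trailing blank space" with plain files.
dd if=/dev/urandom of=file.iso bs=512 count=4 2>/dev/null
cp file.iso burned.img
dd if=/dev/zero bs=512 count=2 >> burned.img 2>/dev/null  # trailing padding

md5sum burned.img file.iso       # these differ: the padding changes the sum
head -c "$(stat --format=%s file.iso)" burned.img | md5sum
md5sum file.iso                  # the last two sums match
```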
Calculate md5sum of a CD/DVD
I have a bunch of Class 10 UHS-1 SDHC SD cards from different manufacturers. They are all partitioned as follows:

$ sudo fdisk -l /dev/sdj
Disk /dev/sdj: 14.9 GiB, 15931539456 bytes, 31116288 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0000de21

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sdj1          2048  1050623  1048576  512M  c W95 FAT32 (LBA)
/dev/sdj2       1050624  2099199  1048576  512M 83 Linux
/dev/sdj3       2099200  3147775  1048576  512M 83 Linux
/dev/sdj4       3147776 31116287 27968512 13.3G 83 Linux

I used a memory card duplicator to copy the images; all cards have the same content. When I mount the second partition of any two SD cards and compare the content, they are exactly the same:

$ sudo mount -o ro /dev/sdg2 /mnt/system-a/
$ sudo mount -o ro /dev/sdj2 /mnt/system-b/
$ diff -r --no-dereference /mnt/system-a /mnt/system-b/
$ # prints nothing

However, if I compare the sha1sums of the partitions, they sometimes differ:

$ sudo dd if=/dev/sdg2 | sha1sum
1048576+0 records in
1048576+0 records out
536870912 bytes (537 MB) copied, 12.3448 s, 43.5 MB/s
ee7a16a8d7262ccc6a2e6974e8026f78df445e72  -
$ sudo dd if=/dev/sdj2 | sha1sum
1048576+0 records in
1048576+0 records out
536870912 bytes (537 MB) copied, 12.6412 s, 42.5 MB/s
4bb6e3e5f3e47dc6cedc6cf8ed327ca2ca7cd7c4  -

Stranger still, if I compare these two drives using a binary diffing tool like radiff2, I see the following:

$ sudo dd if=/dev/sdg2 of=sdg2.img
1048576+0 records in
1048576+0 records out
536870912 bytes (537 MB) copied, 12.2378 s, 43.9 MB/s
$ sudo dd if=/dev/sdj2 of=sdj2.img
1048576+0 records in
1048576+0 records out
536870912 bytes (537 MB) copied, 12.2315 s, 43.9 MB/s
$ radiff2 -c sdg2.img sdj2.img
767368

767368 changes, even though diff didn't see any differences in the content!
And for sanity, if I compare two partitions that had the same sha1sum, I see the following:

$ radiff2 -c sdj2.img sdf2.img
0

0 changes! Here is a breakdown of the different sha1sums I see from different cards. It seems like the manufacturer of the card has a large effect on what sha1sum I get when I use dd to read the drive. Despite the differences in sha1sums, all these cards work for my purposes. However, it is making integrity checking difficult because I cannot compare sha1sums. How is it possible that two SD card partitions can have different sha1sums, yet exactly the same content when mounted?

Answer: So now it works as expected. To clear things up, the inconsistency was caused by the SySTOR duplicator I was using. The copy setting I had it use copied partition information and files, but it did not necessarily dd the bits to ensure a one-to-one match.
Did you compare their contents immediately after writing the duplicated contents? If yes, they should come out exactly the same. For example:

# Duplicate
dd bs=16M if=/dev/sdg of=/dev/sdk
# Comparing should produce no output
cmp /dev/sdg /dev/sdk
# Compare, listing each byte difference; also no output
cmp -l /dev/sdg /dev/sdk

This is only true if the cards have exactly the same size. Sometimes even different batches of cards from the same manufacturer and model come out with slightly different sizes. Use blockdev --getsize64 to get the exact size of a device. Also, if both cards have exactly identical sizes but you wrote an image to them that was smaller than their capacity, then the garbage that comes after the end of the image may cause differences to be reported. Once you mount any filesystem on the device, you will start to see differences: the filesystem implementation will write various things to the filesystem, such as an empty journal, or a flag/timestamp to mark the filesystem as clean, and then you won't see identical content anymore. I believe this can be the case under some circumstances even if you mount the filesystem read-only.
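The "garbage past the end of the image" effect can be demonstrated with plain files standing in for the block devices (a sketch; for real devices you would use blockdev --getsize64 instead of stat):

```shell
# Two "cards" hold the same image, but one has junk past the image's end.
dd if=/dev/urandom of=image bs=1024 count=4 2>/dev/null
cp image cardA
cp image cardB
printf 'leftover-garbage' >> cardB    # junk past the image on one "card"

cmp cardA cardB || echo "cards differ as wholes"
size=$(stat --format=%s image)        # blockdev --getsize64 for real devices
head -c "$size" cardB | cmp - image && echo "image regions match"
```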
Why do these duplicated SD cards have different sha1sums for their content?
I am confused how md5sum --check is supposed to work:

$ man md5sum
       -c, --check
              read MD5 sums from the FILEs and check them

I have a file, I can pipe it to md5sum:

$ cat file | md5sum
44693b9ef883e231cd9f90f737acd58f  -

When I want to check the integrity of the file tomorrow, how can I check if the md5sum is still 44693b9ef883e231cd9f90f737acd58f? Note that cat file might be a stream, so I want to use the pipe as in my example, not md5sum file.
You do this:

cat file | md5sum > sumfile

And the next day you can do this:

cat file | md5sum --check sumfile

Which prints:

-: OK

if everything is alright.
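The full round trip, runnable as-is (the stored line reads "<hash>  -", and the "-" filename tells md5sum --check to hash stdin again):

```shell
# Record the sum of a stream today, verify the same stream tomorrow.
printf 'some data' > file
cat file | md5sum > sumfile          # sumfile contains "<hash>  -"
cat file | md5sum --check sumfile    # prints "-: OK" on success
```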
check md5sum from pipe
When I log in to an SSH server/host I get asked whether the hash of its public key is correct, like this:

# ssh 1.2.3.4
The authenticity of host '[1.2.3.4]:22 ([1.2.3.4]:22)' can't be established.
RSA key fingerprint is SHA256:CxIuAEc3SZThY9XobrjJIHN61OTItAU0Emz0v/+15wY.
Are you sure you want to continue connecting (yes/no)? no
Host key verification failed.

In order to be able to compare, I previously ran this command on the SSH server and saved the result to a file on the client:

# ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub
2048 f6:bf:4d:d4:bd:d6:f3:da:29:a3:c3:42:96:26:4a:41  /etc/ssh/ssh_host_rsa_key.pub (RSA)

For some great reason (no doubt) one of these commands uses a different (newer?) way of displaying the hash, thereby helping man-in-the-middle attackers enormously, because it requires a non-trivial conversion to compare these. How do I compare these two hashes, or better: force one command to use the other's format? The -E option to ssh-keygen is not available on the server.
ssh

# ssh -o "FingerprintHash sha256" testhost
The authenticity of host 'testhost (256.257.258.259)' can't be established.
ECDSA key fingerprint is SHA256:pYYzsM9jP1Gwn1K9xXjKL2t0HLrasCxBQdvg/mNkuLg.

# ssh -o "FingerprintHash md5" testhost
The authenticity of host 'testhost (256.257.258.259)' can't be established.
ECDSA key fingerprint is MD5:de:31:72:30:d0:e2:72:5b:5a:1c:b8:39:bf:57:d6:4a.

ssh-keyscan & ssh-keygen

Another approach is to download the public key to a system which supports both MD5 and SHA256 hashes:

# ssh-keyscan testhost >testhost.ssh-keyscan
# cat testhost.ssh-keyscan
testhost ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItb...
testhost ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0U...
testhost ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMKHh...

# ssh-keygen -lf testhost.ssh-keyscan -E sha256
256 SHA256:pYYzsM9jP1Gwn1K9xXjKL2t0HLrasCxBQdvg/mNkuLg testhost (ECDSA)
2048 SHA256:bj+7fjKSRldiv1LXOCTudb6piun2G01LYwq/OMToWSs testhost (RSA)
256 SHA256:hZ4KFg6D+99tO3xRyl5HpA8XymkGuEPDVyoszIw3Uko testhost (ED25519)

# ssh-keygen -lf testhost.ssh-keyscan -E md5
256 MD5:de:31:72:30:d0:e2:72:5b:5a:1c:b8:39:bf:57:d6:4a testhost (ECDSA)
2048 MD5:d5:6b:eb:71:7b:2e:b8:85:7f:e1:56:f3:be:49:3d:2e testhost (RSA)
256 MD5:e6:16:94:b5:16:19:40:41:26:e9:f8:f5:f7:e7:04:03 testhost (ED25519)
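If neither -E nor FingerprintHash is available anywhere, the SHA256 fingerprint can also be computed by hand: it is the unpadded base64 of the SHA-256 digest of the raw key blob, and the blob is just the base64 field of the public key file. A sketch, assuming openssl and coreutils base64 are installed; "hostkey.pub" is a placeholder for any OpenSSH *.pub file:

```shell
# Compute the SHA256:<fingerprint> string manually from a public key file.
awk '{print $2}' hostkey.pub \
  | base64 -d \
  | openssl dgst -sha256 -binary \
  | base64 \
  | tr -d '='        # OpenSSH prints the base64 without '=' padding
```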
How to compare different SSH fingerprint (public key hash) formats?
I am looking for a simple way to pipe the result of md5sum into another command. Something like this:

$ echo -n 'test' | md5sum | ...

My problem is that md5sum outputs not only the hash of the string, but also a hyphen, which indicates that the input came from stdin. I checked the man page and I didn't find any flag to control the output.
You can use the command cut; it allows you to cut a certain character/byte range from every input line. Since an MD5 hash has a fixed length (32 characters), you can use the option -c 1-32 to keep only the first 32 characters of the input line:

echo -n test | md5sum | cut -c 1-32

Alternatively, you can tell cut to split the line at every space and output only the first field (note the quotes around the space character):

echo -n test | md5sum | cut -d " " -f 1

See the cut manpage for more options.
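Either variant leaves just the bare hash, ready to pipe onward or capture in a variable (printf is used here instead of echo -n to sidestep echo's portability quirks):

```shell
# Capture just the hash field for later use.
hash=$(printf '%s' test | md5sum | cut -d ' ' -f 1)
echo "$hash"   # prints 098f6bcd4621d373cade4e832627b4f6, the md5 of "test"
```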
How to pipe md5 hash result in shell
I began to use org-mode for planning out my tasks in a GTD-style system. Keeping all my org files in a directory inside a Dropbox folder, I run emacs to edit and manage these files from three different local machines: Cygwin, Mac OS X, and Debian. As I also use MobileOrg to access these org files from my iPad, a checksums.dat file must be kept up to date whenever any changes are made. This can be done by running md5sum *.org > checksums.dat. The problem is that there are three different names for the md5sum command: md5sum.exe in Cygwin, md5 in Mac OS X, and md5sum in Debian. The ideal solution would be a makefile, stored in the Dropbox folder, that detects which command is available on the current machine and runs it to do the md5 checksum operation.
Possible commands to generate a checksum

Unfortunately, there's no standard utility to generate a cryptographic checksum. There is a standard utility to generate a CRC: cksum; this may be sufficient for your purpose of detecting changes in a non-hostile environment.

I would recommend using SHA-1 rather than MD5. There aren't many systems that have an MD5 utility but no SHA-1, and if you're going to use cryptographic checksums, you might as well use an algorithm with no known method to find collisions (assuming you also check the size).

One tool that's not standard but common and can calculate digests is OpenSSL. It's available for Cygwin, Debian and OS X, but unfortunately not part of the default installation on OS X.

openssl dgst -sha1

On OS X 10.6 there is a shasum utility, which is also present on Debian (it's part of the perl package) and I believe on Cygwin too. This is a Perl script. Most unix systems have Perl installed, so you could bundle that script alongside your makefile if you're worried about it not being available everywhere.

Selecting the right command for your system

OK, let's say you really can't find a command that works everywhere.

In the shell

Use the type built-in to see if a command is available.

sum=
for x in sha1sum sha1 shasum 'openssl dgst -sha1'; do
  if type "${x%% *}" >/dev/null 2>/dev/null; then sum=$x; break; fi
done
if [ -z "$sum" ]; then echo 1>&2 "Unable to find a SHA1 utility"; exit 2; fi
$sum *.org

GNU make

You can use the shell function to run a shell snippet when the makefile is loaded and store the output in a variable.

sum := $(shell { command -v sha1sum || command -v sha1 || command -v shasum; } 2>/dev/null)

%.sum: %
	$(sum) $< >$@

Portable (POSIX) make

You can only run shell commands in rules, so each rule that computes a checksum has to contain the lookup code. You can put the snippet in a variable. Remember that separate lines in rules are evaluated independently, and that $ signs that are to be passed to the shell need to be escaped as $$.

determine_sum = \
  sum=; \
  for x in sha1sum sha1 shasum 'openssl dgst -sha1'; do \
    if type "$${x%% *}" >/dev/null 2>/dev/null; then sum=$$x; break; fi; \
  done; \
  if [ -z "$$sum" ]; then echo 1>&2 "Unable to find a SHA1 utility"; exit 2; fi

checksums.dat: FORCE
	$(determine_sum); \
	$$sum *.org
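The detection loop is easy to try on its own; this standalone sketch uses command -v (POSIX, equivalent here to the type test above) and just reports which tool it picked:

```shell
# Pick the first available SHA-1 tool from a preference list.
sum=
for x in sha1sum sha1 shasum 'openssl dgst -sha1'; do
  if command -v "${x%% *}" >/dev/null 2>&1; then sum=$x; break; fi
done
echo "using: ${sum:-none found}"
```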
how can a makefile detect whether a command is available in the local machine?
So I'm setting up a WordPress backup guide/making a backup schedule for myself for real. I want to do MySQL dumps daily, but the command requires either -p followed by user input, or --password="plain text password". Could I point it at a file that is at least MD5-hashed (or better) and protected, to increase security but make the command require no user input? Any help is appreciated! For reference, here is the command I want to run:

mysqldump -u [username] --password=~/wp_backups/sqldumps/.sqlpwd [database name] > ~/wp_backups/sqldumps/"$(date '+%F').sql"
You have the following password options:

- provide the password on the command line through the -p option
- provide the password via the MYSQL_PWD environment variable
- put your configuration in the ~/.my.cnf file under the [mysqldump] section

In all cases your client needs a plain-text password to be able to authenticate. You mentioned hashes, but the defining trait of a hash is that it's a one-way function (i.e. you won't be able to restore the original password from the hash), which makes it unusable as an authentication token.

Since you are backing up the WordPress database from, allegedly, the same account that hosts your WordPress, there is no security improvement in trying to hide the password from the user that runs WordPress (the database credentials can easily be extracted from the wp-config.php file anyway). So I'd suggest defining the following ~/.my.cnf:

[mysqldump]
host = your_MySQL_server_name_or_IP
port = 3306
user = database_user_name
password = database_password

Then ensure that the file has 0600 permissions. This way mysqldump does not need any database credentials specified on its command line (they will be read from the ~/.my.cnf file).
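Setting up the file with safe permissions can be scripted; a sketch using the placeholder values from the answer (umask 077 ensures the file is never world-readable, even for an instant):

```shell
# Create ~/.my.cnf with restrictive permissions before writing credentials.
# The values below are the answer's placeholders, not real credentials.
umask 077
cat > "$HOME/.my.cnf" <<'EOF'
[mysqldump]
host = your_MySQL_server_name_or_IP
user = database_user_name
password = database_password
EOF
chmod 600 "$HOME/.my.cnf"
ls -l "$HOME/.my.cnf"   # should show -rw-------
```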
MySQLdump via crontab - Pass --password=/hashed/password/file so I can use via crontab w/o using plain text password
I want to find the md5 hash of the string "a", but running echo "a" | md5sum gives me a different hash than what I find if I search the internet (for example using DuckDuckGo or the first search result I found). Running echo "a" | md5sum gives me "60b725f10c9c85c70d97880dfe8191b3", but it should be "0cc175b9c0f1b6a831c399e269772661". If I do a reverse hash lookup for "60b725f10c9c85c70d97880dfe8191b3", I do however get "a".
The reason the hashes differ is that echo appends a newline to the output string to make it pretty. This can be suppressed with the -n flag (if your implementation of echo supports it), or by using another program (like printf):

> echo "a" | md5sum
60b725f10c9c85c70d97880dfe8191b3  -
> echo -n "a" | md5sum
0cc175b9c0f1b6a831c399e269772661  -
> printf "a" | md5sum
0cc175b9c0f1b6a831c399e269772661  -
Why does `md5sum` not give the same hash as the Internet does?
I often have large directories that I want to transfer to a local computer from a server. Instead of using recursive scp or rsync on the directory itself, I'll often tar and gzip it first and then transfer it. Recently, I wanted to check that this actually works, so I ran md5sum on two independently generated tar-gzip archives of the same source directory. To my surprise, the MD5 hash was different. I did this two more times and it was always a new value. Why am I seeing this result? Aren't two tar-and-gzipped directories, both generated in the exact same way with the same version of GNU tar, supposed to be exactly the same? For clarity, I have a source directory and a destination directory. In the destination directory I have dir1 and dir2. I'm running:

tar -zcvf /destination/dir1/source.tar.gz source && md5sum /destination/dir1/source.tar.gz >> md5.txt
tar -zcvf /destination/dir2/source.tar.gz source && md5sum /destination/dir2/source.tar.gz >> md5.txt

Each time I do this, I get a different result from md5sum. tar produces no errors or warnings.
From the looks of things you’re probably being bitten by gzip timestamps; to avoid those, run

GZIP=-n tar -zcvf ...

Note that to get fully reproducible tarballs, you should also impose the sort order used by tar:

GZIP=-n tar --sort=name -zcvf ...

If your version of tar doesn’t support --sort, use this instead:

find source -print0 | LC_ALL=C sort -z | GZIP=-n tar --no-recursion --null -T - -zcvf ...
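Recent gzip releases deprecate the GZIP environment variable, so an equivalent (and self-contained) formulation pipes tar through gzip -n explicitly. A sketch demonstrating that two runs then produce bit-identical archives:

```shell
# With the gzip timestamp dropped (-n) and tar's member order pinned
# (--sort=name), two archives of the same tree come out identical.
mkdir -p source && printf one > source/a && printf two > source/b
tar --sort=name -cf - source | gzip -n > first.tar.gz
tar --sort=name -cf - source | gzip -n > second.tar.gz
md5sum first.tar.gz second.tar.gz   # the two sums match
```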
Tar produces different files each time
Situation

I'm on FreeBSD 11.2 without a GUI, and I'm brand new to BSD systems. Suppose we have a SHA512SUM file generated on FreeBSD with:

sha512 encrypt-file-aes256 decrypt-file-aes256 > SHA512SUM

It looks different from the Linux format (on Linux, this BSD-style format can be generated with the --tag switch):

SHA512 (encrypt-file-aes256) = 9170caaa45303d2e5f04c21732500980f3b06fc361018f953127506b56d3f2f46c95efdc291e160dd80e39b5304f327d83fe72c625ab5f31660db9c99dbfd017
SHA512 (decrypt-file-aes256) = 893693eec618542b0b95051952f9258824fe7004c360f8e6056a51638592510a704e27b707b9176febca655b7df581c9a6e2220b6511e8426c1501f6b2dd48a9

Question

How do I check this file? There is no --check option in the man page.

Progress

So far I am only able to test a single file manually, hard-coding the hash sum:

sha512 -c "9170caaa45303d2e5f04c21732500980f3b06fc361018f953127506b56d3f2f46c95efdc291e160dd80e39b5304f327d83fe72c625ab5f31660db9c99dbfd017" encrypt-file-aes256 && echo $?

Scripting-wise, I don't yet see a way of checking the whole SHA512SUM file automatically. Note that it may contain many more files than the two in my case.
You can use the shasum tool (see its man page), which has a -c option to check against a checksum file and is a front-end to several checksum algorithms, including SHA-512. You can use a command like the one below to check both files:

$ shasum -a 512 -c SHA512SUM.sha512sum

The shasum tool is only able to parse the output format compatible with the one produced by sha512sum (the tool usually shipped in Linux distributions). You can convert from a BSD-style checksum file to a Linux-style one with a simple sed command:

$ sed -ne 's/^SHA512 (\(.*\)) = \(.*\)/\2  \1/p' SHA512SUM > SHA512SUM.sha512sum

(Though if you're generating the checksums yourself, then also using shasum to generate them is a good option, compatible with the tools found on Linux.) The shasum tool is provided by the FreeBSD port p5-Digest-SHA and can be installed with pkg by running:

$ sudo pkg install p5-Digest-SHA
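The sed conversion can be checked on a sample line; here the hash is shortened for readability (real SHA-512 hashes are 128 hex characters):

```shell
# BSD-style -> Linux-style checksum line conversion.
printf 'SHA512 (encrypt-file-aes256) = 9170caaa45303d\n' \
  | sed -ne 's/^SHA512 (\(.*\)) = \(.*\)/\2  \1/p'
# prints: 9170caaa45303d  encrypt-file-aes256
```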
How to check a hash sum file on FreeBSD?
When it comes to the passwd/user-password-crypted statement in a preseed file, most examples use an MD5 hash. Example:

# Normal user's password, either in clear text
#d-i passwd/user-password password insecure
#d-i passwd/user-password-again password insecure
# or encrypted using an MD5 hash.
#d-i passwd/user-password-crypted password [MD5 hash]

From Debian's Appendix B, Automating the installation using preseeding. A few sources show that it's also possible to use SHA-512:

Try using a hashed password like this:
$ mkpasswd -m sha-512
[...]
And then in your preseed file:
d-i passwd/user-password-crypted password $6$ONf5M3F1u$bpljc9f1SPy1w4J2br[...]

From Can't automate user creation with preseeding on AskUbuntu. This is slightly better than MD5, but still doesn't resist well against brute force and rainbow tables. What other algorithms can I use? For instance, is PBKDF2 supported, or am I limited to the algorithms used in /etc/shadow, that is, MD5, Blowfish, SHA-256 and SHA-512?
You can use anything which is supported in the /etc/shadow file; the string given in the preseed file is simply put into /etc/shadow. To create a password with an explicit salt, use mkpasswd with the salt option (-S):

mkpasswd -m sha-512 -S $(pwgen -ns 16 1) mypassword
$6$bLyz7jpb8S8gOpkV$FkQSm9YZt6SaMQM7LPhjJw6DFF7uXW.3HDQO.H/HxB83AnFuOCBRhgCK9EkdjtG0AWduRcnc0fI/39BjmL8Ee1

In the command above the salt is generated by pwgen.
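If mkpasswd isn't installed, openssl passwd (OpenSSL 1.1.1 or later) can produce the same $6$ crypt string; a sketch with placeholder salt and password:

```shell
# Generate a SHA-512 crypt hash suitable for passwd/user-password-crypted.
# "examplesalt" and "mypassword" are placeholders.
openssl passwd -6 -salt examplesalt mypassword
# output starts with $6$examplesalt$
```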
What hash algorithms can I use in preseed's passwd/user-password-crypted entry?
I was trying to compute sha256 for a simple string, namely "abc". I found out that using the sha256sum utility like this:

sha256sum file_with_string

gives results identical to:

sha256sum    # enter, to read input from stdin
abc
^D

namely:

edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb

Note that before the end-of-input signal another newline was fed to stdin. What bugged me at first was that when I decided to verify it with an online checksum calculator, the result was different:

ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad

I figured it might have had something to do with the second newline I fed to stdin, so I tried inserting ^D twice this time (instead of using newline), with the following result:

abcba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad

Now, this is of course poorly formatted (due to the lack of a newline character), but that aside, it matches the one above. After that, I realized I clearly fail to understand something about input parsing in the shell. I double-checked and there's no redundant newline in the file I specified initially, so why am I experiencing this behavior?
The difference is the newline. First, let's just collect the sha256sums of abc and abc\n:

$ printf 'abc\n' | sha256sum
edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb  -
$ printf 'abc' | sha256sum
ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad  -

So, the ba..ad sum is for the string abc, while the ed..cb one is for abc\n. Now, if your file is giving you the ed..cb output, that means your file has a newline. And, given that "text files" require a trailing newline, most editors will add one for you if you create a new file. To get a file without a newline, use the printf approach above. Note how file will warn you if your file has no newline:

$ printf 'abc' > file
$ file file
file: ASCII text, with no line terminators

And:

$ printf 'abc\n' > file2
$ file file2
file2: ASCII text

And now:

$ sha256sum file file2
ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad  file
edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb  file2
Why are end-of-file and end-of-input-signal treated differently by sha256sum?
I have downloaded a Debian ISO with jigdo; the download finished successfully and printed the following message:

FINISHED --2021-01-22 11:57:20--
Total wall clock time: 4.3s
Downloaded: 9 files, 897K in 1.8s (494 KB/s)
Found 9 of the 9 files required by the template
Successfully created `debian-testing-amd64-netinst.iso'
-----------------------------------------------------------------
Finished!
The fact that you got this far is a strong indication that
`debian-testing-amd64-netinst.iso' was generated correctly.
I will perform an additional, final check, which you can interrupt
safely with Ctrl-C if you do not want to wait.
MD5 from template: l2l48nbYVylT4qrQ0Eq3ww
MD5 from image:    l2l48nbYVylT4qrQ0Eq3ww
OK: MD5 Checksums match, image is good!
WARNING: MD5 is not considered a secure hash!
WARNING: It is recommended to verify your image in other ways too!

Debian offers three ways to verify an ISO image: sha1sums, md5sums and sha256sums. The sha1sum is considered vulnerable to collision attacks, but I have heard nothing about MD5. Why is md5sum considered an insecure hash? Is sha256sum the only secure way to verify a downloaded Debian ISO?
MD5 and SHA-1 are both vulnerable to (chosen-prefix) collision attacks. The SHA-2 (SHA-256, SHA-384, SHA-512, …) and SHA-3 families of hash functions are not. What a chosen-prefix collision attack means is that given a prefix and a suffix, it's possible to find two middles such that prefix+middle1+suffix and prefix+middle2+suffix have the same hash. In the context of system images, this allows someone to generate an image that's perfectly fine, distribute it, then surreptitiously change it to replace part of it with some malware. The modification won't be detected easily because the original version and the modified version both have the same hash. For MD5, a collision attack is dirt cheap: you can run it instantly on your PC. For SHA-1, a collision attack is moderately expensive (~USD 45k as of 2020). For SHA-2 and SHA-3, there is no known way to produce a collision even with an NSA level of computing power. Even with MD5 or SHA-1, there is no known way to take an existing image and find another image with the same hash (a second-preimage attack); the original image must be specifically crafted to make the attack possible.
Why is verifying downloads with MD5 hash considered insecure?
I have to test a hash function, and I want to change only a single bit of a specific file. I tried with the dd command. That works, but I can only change a whole byte, not just a bit:

sudo dd if=/dev/zero of=/file.bin bs=1 seek=10 count=1 conv=notrunc

I also tried the sed command with a regex, but as I don't know the content of the file, I can't just change an "a" to a "b". Does anyone know a command for doing this?
Since the file may contain nulls, text-oriented filters like sed are going to fail. But you can use a programming language that can handle nulls, like perl or python. Here's a solution for Python 3. It's a few lines longer than strictly necessary, for readability.

#!/usr/bin/env python3
"""Toggle the bit at the specified offset.
Syntax: <cmdname> filename bit-offset"""

import sys

fname = sys.argv[1]

# Convert bit offset to bytes + leftover bits
bitpos = int(sys.argv[2])
nbytes, nbits = divmod(bitpos, 8)

# Open in read+write, binary mode; read 1 byte
fp = open(fname, "r+b")
fp.seek(nbytes, 0)
c = fp.read(1)

# Toggle bit at byte position `nbits`
toggled = bytes([ord(c) ^ (1 << nbits)])
# print(toggled)  # diagnostic output

# Back up one byte, write out the modified byte
fp.seek(-1, 1)  # or absolute: fp.seek(nbytes, 0)
fp.write(toggled)
fp.close()

Save it in a file (e.g., bitflip), make it executable, and run it with the filename to modify and the offset in bits. Note that it modifies the file in place. Run it twice with the same offset and you'll get your file restored.
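The same idea can be sketched in pure shell with od, dd and printf (GNU tools assumed; "flipbit" is a name made up here, and bits are numbered LSB-first within each byte, matching the Python script):

```shell
# Toggle one bit of a file in place: flipbit FILE BIT-OFFSET
flipbit() {
  byte=$(( $2 / 8 )); bit=$(( $2 % 8 ))
  old=$(od -An -tu1 -j "$byte" -N1 "$1" | tr -d ' ')   # read the byte as decimal
  new=$(( old ^ (1 << bit) ))                          # toggle the requested bit
  printf "\\$(printf '%03o' "$new")" \
    | dd of="$1" bs=1 seek="$byte" count=1 conv=notrunc 2>/dev/null
}

printf 'abc' > demo.bin
flipbit demo.bin 1      # toggle bit 1 of byte 0: 'a' (0x61) -> 'c' (0x63)
cat demo.bin            # prints: cbc
```

Running it twice with the same offset restores the original file, just like the Python version.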
Change only one bit in a file
Background

I am about to migrate files from my old NAS to a new one, and want to verify the data integrity. The old NAS (Debian) uses the Linux ext3 file system, whilst the new one (FreeNAS) is based on ZFS. To speed up the integrity validation I am trying a triage approach:

- first validate all file sizes
- secondly md5 hash the first 512 bytes of each file
- lastly md5 hash each entire file

The idea being that the first two steps would filter out obviously corrupted files, and be much quicker than running md5 in bulk over TB of files.

Question

I have constructed a bash command for performing an md5 hash of a directory structure, sorting the output by file name to ensure a deterministic order on my Linux NAS:

find somedir -type f -exec md5sum {} \; | sort -k 34
12e761f96223145aa63f4f48f252d7fb  /somedir/foo.txt
18409feb00b6519c891c751fe2541fdc  /somedir/bar.txt

But how do I modify the above if I want to md5 only the first 512 bytes of each file?
You can use dd to pipe only the first 512 bytes to md5sum. However, this makes md5sum oblivious of the filename, so in addition replace the - with the filename again:

find . -type f -exec sh -c "dd if={} bs=512 count=1 2>/dev/null | md5sum | sed s\|-\|{}\|" \; | sort -k 2
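A variant of the same idea using head, with {} passed as a positional argument to sh rather than spliced into the command string (safer with odd filenames, though the sed replacement still assumes paths contain no '|'):

```shell
# Hash the first 512 bytes of each file, labelling each sum with its path.
find somedir -type f -exec sh -c '
  for f do
    head -c 512 "$f" | md5sum | sed "s|-\$|$f|"
  done' sh {} + | sort -k 2
```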
md5 hash only first 512 bytes of file
So I'm trying to check dozens of files using a shell script; the file check happens at different times. Is there a way to do this?

md5sum -c 24f4ce42e0bc39ddf7b7e879a File.name

or even better with sha512sum:

sha512sum -c 24f4ce42e0bc39ddf7b7e879a File.name

Right now I have to do this:

md5sum -c file.md5sums File.name

Or better yet I could have all the md5sums in a single file and check them like this:

md5sum -c `sed 1p file.md5sums` File.name
md5sum -c `sed 2p file.md5sums` File.name
md5sum -c `sed 3p file.md5sums` File.name
md5sum -c `sed 4p file.md5sums` File.name

It just seems silly to have dozens of files with single entries in them.
If you're doing this in a script then you can just do a simple comparison check, e.g.

if [ "$(md5sum < File.name)" = "24f4ce42e0bc39ddf7b7e879a  -" ]
then
    echo Pass
else
    echo Fail
fi

Note the extra spaces and the - needed to match the output from md5sum. You can make this a one-liner if it looks cleaner:

[[ "$(md5sum < File.name)" = "24f4ce42e0bc39ddf7b7e879a  -" ]] && echo Pass || echo Fail
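The same pattern with a real hash, runnable as-is (098f6bcd4621d373cade4e832627b4f6 is the md5 of the string "test"; note the two spaces before the - in md5sum's stdin output):

```shell
printf test > File.name
expected=098f6bcd4621d373cade4e832627b4f6
if [ "$(md5sum < File.name)" = "$expected  -" ]
then
    echo Pass
else
    echo Fail
fi
```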
md5sum check (no file)
I'm puzzled by the hash (ASCII) string stored under Linux (Ubuntu) in /etc/shadow. Taking a hypothetical case, let the password be 'test' and the salt 'Zem197T4'. Running

$ mkpasswd -m SHA-512 test Zem197T4

generates a long series of ASCII characters (this is actually how Linux stores it in /etc/shadow):

$6$Zem197T4$oCUr0iMuvRJnMqk3FFi72KWuLAcKU.ydjfMvuXAHgpzNtijJFrGv80tifR1ySJWsb4sdPJqxzCLwUFkX6FKVZ0

When using an online SHA-512 generator (e.g. http://www.insidepro.com/hashes.php?lang=eng), what is generated is hex, as below:

option 1) password+salt:
8d4b73598280019ef818e44eb4493c661b871bf758663d52907c762f649fe3355f698ccabb3b0c59e44f1f6db06ef4690c16a2682382617c6121925082613fe2

option 2) salt+password:
b0197333c018b3b26856473296fcb8637c4f58ab7f4ee2d6868919162fa6a61c8ba93824019aa158e62ccf611c829026b168fc4bf90b2e6b63c0f617198006c2

I believe these hex digests should be the 'same thing' as the ASCII string generated by mkpasswd, but how are they related? I hope someone can enlighten me.
On Ubuntu/Debian, mkpasswd is part of the whois package and implemented in mkpasswd.c, which is actually just a sophisticated wrapper around the crypt() function in glibc, declared in unistd.h. crypt() takes two arguments: password and salt. The password is "test" in this case; the salt is prefixed with "$6$" for the SHA-512 hash (see SHA-crypt), so "$6$Zem197T4" is passed to crypt(). Maybe you noticed the -R option of mkpasswd, which determines the number of rounds; in the SHA-crypt document you'll find a default of 5000 rounds. This is the first hint why the result would never equal a simple hash of the concatenated salt and password: it is not hashed only once. Indeed, if you pass -R 5000 explicitly you get the same result; in that case "$6$rounds=5000$Zem197T4" is passed to crypt(), and the glibc implementation extracts the method and number of rounds from it. What happens inside crypt() is more complicated than computing a single hash, and the result is base64-encoded at the end. That's why the result you showed contains all kinds of characters after the last '$', and not only [0-9a-f] as in the typical hex representation of a SHA-512 digest. The algorithm is described in detail in the already-mentioned SHA-crypt document.
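Since sha512-crypt is deterministic for a given salt, password and round count, any correct implementation reproduces the question's string; for instance openssl passwd (OpenSSL 1.1.1 or later) also defaults to 5000 rounds:

```shell
# Reproduce the question's mkpasswd output with openssl's sha512-crypt.
openssl passwd -6 -salt Zem197T4 test
# should print the same $6$Zem197T4$... string shown in the question
```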
SHA512 salted hash from mkpasswd doesn't match an online version
1,333,655,309,000
I want to know whether my /etc/shadow password hash is SHA or MD5 or something else. From what I read, it is related to the $ sign, but I don't have any dollar signs. I'm using Ubuntu 16. Example: user:0.7QYSH8yshtus8d:18233:0:99999:7:::
The shadow(5) manual on Ubuntu refers to the crypt(3) manual. The crypt(3) manual says that the default password encryption algorithm is DES. It goes on to say that the glibc2 library function also supports MD5 and at least SHA-256 and SHA-512, but that an entry in /etc/shadow for a password encrypted by one of these algorithms would look like $1$salt$encrypted (for MD5), $5$salt$encrypted (for SHA-256), or $6$salt$encrypted (for SHA-512), where each $ is a literal $ character, where salt is a salt of up to 16 characters, and where encrypted is the actual hash. Since your encrypted password does not follow that pattern, I'm assuming that it's encrypted using the default DES algorithm.
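The prefix rules above can be turned into a small shell sketch (the classify_hash function name is made up for illustration; the $y$ yescrypt and $2?$ bcrypt prefixes are additions beyond what the answer lists, included since they appear on newer systems):

```shell
# Classify the second field of a shadow entry by its "$id$" prefix;
# anything without a recognised '$' prefix is assumed to be traditional
# DES crypt (a bare 13-character string, as in the question).
classify_hash() {
    case $1 in
        '$1$'*)                   echo 'MD5' ;;
        '$2a$'*|'$2b$'*|'$2y$'*)  echo 'bcrypt' ;;
        '$5$'*)                   echo 'SHA-256' ;;
        '$6$'*)                   echo 'SHA-512' ;;
        '$y$'*)                   echo 'yescrypt' ;;
        *)                        echo 'DES (traditional crypt)' ;;
    esac
}
classify_hash '0.7QYSH8yshtus8d'   # DES (traditional crypt)
classify_hash '$6$salt$encrypted'  # SHA-512
```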
How to know if password in /etc/shadow is hashed with SHA or MD?
1,333,655,309,000
In a directory with multiple subdirectories (only one folder deep) containing TIFF files, I'd like to generate an MD5 checksum that writes each filename with its corresponding checksum into a text file. For example, in directory TIFF I have 2 subdirectories: TIFF |- b0125TIFF |- b_0000_001.tif |- b_0000_002.tif |- b_0000_003.tif |- b_0000_004.tif |- c0126TIFF |- c_0000_001.tif |- c_0000_002.tif |- c_0000_003.tif |- c_0000_004.tif My expected text file (the checksums will of course differ): ** foo.md5: 188be1dbd4f6bcfdef8d25639473e6ec *b0125TIFF/b_0000_001.tif 188be1dbd4f6bcfdef8d25639473e6ec *b0125TIFF/b_0000_002.tif 188be1dbd4f6bcfdef8d25639473e6ec *b0125TIFF/b_0000_003.tif 188be1dbd4f6bcfdef8d25639473e6ec *b0125TIFF/b_0000_004.tif 188be1dbd4f6bcfdef8d25639473e6ec *c0126TIFF/c_0000_001.tif 188be1dbd4f6bcfdef8d25639473e6ec *c0126TIFF/c_0000_002.tif 188be1dbd4f6bcfdef8d25639473e6ec *c0126TIFF/c_0000_003.tif 188be1dbd4f6bcfdef8d25639473e6ec *c0126TIFF/c_0000_004.tif How can I achieve that? I know that this generates a single checksum recursively over one directory: find -s . -type f -exec md5 -q {} \; | md5
You don't want to pass the output of the find and md5 through md5, that would just give you an MD5 checksum of a lot of MD5 checksums... $ find TIFF -type f -name '*.tif' -exec md5 {} ';' >md5.txt $ cat md5.txt MD5 (TIFF/b0125TIFF/file-1.tif) = d41d8cd98f00b204e9800998ecf8427e MD5 (TIFF/b0125TIFF/file-2.tif) = d41d8cd98f00b204e9800998ecf8427e MD5 (TIFF/b0125TIFF/file-3.tif) = d41d8cd98f00b204e9800998ecf8427e MD5 (TIFF/c0126TIFF/file-1.tif) = d41d8cd98f00b204e9800998ecf8427e MD5 (TIFF/c0126TIFF/file-2.tif) = d41d8cd98f00b204e9800998ecf8427e MD5 (TIFF/c0126TIFF/file-3.tif) = d41d8cd98f00b204e9800998ecf8427e The md5 implementation on macOS does not support verifying checksums with md5 -c unfortunately, but the shasum utility does: $ find TIFF -type f -name '*.tif' -exec shasum {} ';' >sums.txt $ cat sums.txt da39a3ee5e6b4b0d3255bfef95601890afd80709 TIFF/b0125TIFF/file-1.tif da39a3ee5e6b4b0d3255bfef95601890afd80709 TIFF/b0125TIFF/file-2.tif da39a3ee5e6b4b0d3255bfef95601890afd80709 TIFF/b0125TIFF/file-3.tif da39a3ee5e6b4b0d3255bfef95601890afd80709 TIFF/c0126TIFF/file-1.tif da39a3ee5e6b4b0d3255bfef95601890afd80709 TIFF/c0126TIFF/file-2.tif da39a3ee5e6b4b0d3255bfef95601890afd80709 TIFF/c0126TIFF/file-3.tif $ shasum -c sums.txt TIFF/b0125TIFF/file-1.tif: OK TIFF/b0125TIFF/file-2.tif: OK TIFF/b0125TIFF/file-3.tif: OK TIFF/c0126TIFF/file-1.tif: OK TIFF/c0126TIFF/file-2.tif: OK TIFF/c0126TIFF/file-3.tif: OK shasum calculates the SHA1 hash of a file by default.
OSX: Generate MD5 checksum recursively in a textfile containing files with corresponding checksum
1,333,655,309,000
I used md5sum with pv to check 4 GiB of files that are in the same directory: md5sum dir/* | pv -s 4g | sort The command completes successfully in about 28 seconds, but pv's output is all wrong. This is the sort of output that is displayed throughout: 219 B 0:00:07 [ 125 B/s ] [> ] 0% ETA 1668:01:09:02 It's like this without the -s 4g and | sort as well. I've also tried it with different files. I've tried using pv with cat and the output was fine, so the problem seems to be caused by md5sum.
The pv utility is a "fancy cat", which means that you may use pv in most situations where you would use cat. Using cat with md5sum, you can compute the MD5 checksum of a single file with cat file | md5sum or, with pv, pv file | md5sum Unfortunately though, this does not allow md5sum to insert the filename into its output properly. Now, fortunately, pv is a really fancy cat, and on some systems (Linux), it's able to watch the data being passed through another process. This is done by using its -d option with the process ID of that other process. This means that you can do things like md5sum dir/* | sort >sums & sleep 1 pv -d "$(pgrep -n md5sum)" This would allow pv to watch the md5sum process. The sleep is there to allow md5sum, which is running in the background, to properly start. pgrep -n md5sum would return the PID of the most recently started md5sum process that you own. pv will exit as soon as the process that it is watching terminates. I've tested this particular way of running pv a few times and it seems to generally work well, but sometimes it seems to stop outputting anything as md5sum switches to the next file. Sometimes, it seems to spawn spurious background tasks in the shell. It would probably be safest to run it as md5sum dir/* >sums & sleep 1 pv -W -d "$!" sort -o sums sums The -W option will cause pv to wait until there's actual data being transferred, although this does also not always seem to work reliably.
Using pv with md5sum
1,333,655,309,000
I would like to rename some files to their contents' MD5 sum; for example, if file foo is empty, it should be renamed to d41d8cd98f00b204e9800998ecf8427e. Does it have to be script or can I use something like the rename tool?
Glenn's answer is good; here's a refinement for multiple files: md5sum file1 file2 file3 | # or *.txt, or whatever while read -r sum filename; do mv -v "$filename" "$sum" done If you're generating files with find or similar, you can replace the md5sum invocation with something like find . <options> -print0 | xargs -0 md5sum (with the output also piped into the shell loop). This is taking the output of md5sum, which consists of multiple lines with a sum and then the file it corresponds to, and piping it into a shell loop which reads each line and issues a mv command that renames the file from the original name to the sum. Any files with identical sums will be overwritten; however, barring unusual circumstances (like if you're playing around with md5 hash collisions), that will mean they had the same contents, so you don't lose any data anyway. If you need to introduce other operations on each file, you can put them in the loop, referring to the variables $filename and $sum, which contain the original filename and the MD5 sum respectively.
How to rename multiple files to their contents' MD5 sum?
1,333,655,309,000
I would like to get the sha1 checksums of all files inside a simple tar archive as a list, or a new file Without using the disk space to unpack the big tar file. Something with piping and calculating the sha1 on the fly, directing the output to /dev/null I searched google a lot and did some experiments with pipes but could not get very far. Would really save me a lot of time checking backups.
Too easy: tar xvJf myArchive.tar.xz --to-command=sha1sum The result looks like this: z/ z/DOCUMENTATION 3c4d9df9bcbd1fb756b1aaba8dd6a2db788a8659 *- z/getnameenv.sh 1b7f1ef4bbb229e4dc5d280c3c9835d9d061726a *- Or create "tarsha1.sh" with: #!/bin/bash sha1=$(sha1sum) printf '%s' "$sha1" | sed 's/ .*$//' echo " $TAR_FILENAME" Then use it this way: tar xJf myArchive.tar.xz --to-command=./tarsha1.sh The result looks like this: 3c4d9df9bcbd1fb756b1aaba8dd6a2db788a8659 z/DOCUMENTATION 1b7f1ef4bbb229e4dc5d280c3c9835d9d061726a z/getnameenv.sh
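If your tar doesn't support --to-command, a slower but portable sketch is to list the members and hash each one's extracted stream separately (the tar_sha1 function name is made up; note this re-reads the archive once per member, so it is much slower on big archives):

```shell
# Print "sha1  member" for every regular file in a tar archive without
# writing anything to disk; directory entries (trailing /) are skipped.
tar_sha1() {
    tar tf "$1" | while IFS= read -r member; do
        case $member in */) continue ;; esac
        sum=$(tar xOf "$1" "$member" | sha1sum)
        printf '%s  %s\n' "${sum%% *}" "$member"
    done
}
# tar_sha1 myArchive.tar.xz
```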
How to create sha1 checksums of files inside a tar archive without using much disk space
1,382,691,864,000
I want to search a file and replace a specific pattern with its SHA-1 hash value. For example, let file.txt have the following content: one S56G one two three four five V67X six and I want to replace the pattern [A-Z][0-9]\{2\}[A-Z] with the SHA-1 value of the match. In the example above, the matches are S56G and V67X. Using sed, I tried: sed "s/[A-Z][0-9]\{2\}[A-Z]/$(echo \& | sha1sum)/g" without success, as the result is always the hash value of '&'. I also tried the ge flags, with the command: sed 's/[A-Z][0-9]\{2\}[A-Z]/echo & | sha1sum/ge' which throws errors: sh: 1: one: not found sha1sum: one: No such file or directory sha1sum: two: No such file or directory sha1sum: three: No such file or directory
In your attempt, the command substitution ($(…)) is performed by the shell before sed is executed, so sed only ever receives the hash of the literal string &. Use a scripting language whose regular-expression substitution supports code execution: perl -MDigest::SHA=sha1_hex -pe 's/[A-Z][0-9]{2}[A-Z]/sha1_hex $&/ge' inputfile php -R 'echo preg_replace("/[A-Z][0-9]{2}[A-Z]/e","sha1(\$0)",$argn),"\n";' inputfile (note that the /e modifier was removed in PHP 7; use preg_replace_callback there) ruby -rdigest/sha1 -pe '$_.gsub!(/[A-Z][0-9]{2}[A-Z]/){Digest::SHA1.hexdigest$&}' inputfile python3 -c 'import sys,fileinput,re,hashlib;[sys.stdout.write(re.sub("[A-Z][0-9]{2}[A-Z]",lambda s:hashlib.sha1(s.group(0).encode()).hexdigest(),l))for l in fileinput.input()]' inputfile (in Python 3 the match must be encoded to bytes before hashing)
using sed to replace pattern with hash values
1,382,691,864,000
How can I make a file digest under Linux with the RIPEMD-160 hash function, from the command line?
You can use openssl for that (and for other hash algorithms): $ openssl list-message-digest-commands md4 md5 mdc2 rmd160 sha sha1 $ openssl rmd160 /usr/bin/openssl RIPEMD160(/usr/bin/openssl)= 788e595dcadb4b75e20c1dbf54a18a23cf233787 (On newer OpenSSL releases the equivalent commands are openssl list -digest-commands and openssl dgst -ripemd160 file; with OpenSSL 3.x, RIPEMD-160 may additionally require the legacy provider.)
RIPEMD-160 file digest
1,382,691,864,000
I'm experiencing exactly the same issue as described in this question: Kali Linux: apt-get update returns “Hash Sum mismatch” error. Before you mark this as a duplicate however, I have tried the solutions posted there, as well as on numerous other sites, including: sudo apt-get clean sudo rm -rf /var/lib/apt/lists/* sudo apt-get update Editing /etc/apt/sources.list with alternate official mirrors, such as deb http://mirrors.ocf.berkeley.edu/kali kali-rolling main non-free contrib or deb https://http.kali.org/kali kali-rolling main non-free contrib Everything worked after I first imported the VM. I ran sudo apt update and it found some ~650 packages to upgrade. I ran sudo apt upgrade and it encountered an error partway through. That error was solved using sudo apt --fix-broken install, but that is when this hash sum error began. Unfortunately due to hours of troubleshooting I no longer have the details of the earlier error, but I believe it was an error extracting a package due to corrupt data. I've tried multiple official mirrors, but I get the same error. Additionally, when I downloaded the Packages.gz file here on my Windows machine (VM host) and computed the SHA256 hash, I got the exact hash that apt printed as the expected value. This led me to believe that the error was not with the mirror but with my VM. The next thing I tried was wget https://mirrors.ocf.berkeley.edu/kali/dists/kali-rolling/main/binary-amd64/Packages.gz followed by sha256sum Packages.gz, which provided yet another different hash output. 
To be clear, I have seen 3 different hashes for the same file: The "correct" hash shown by apt as expected, which is the one that windows also produced after downloading the file using a browser The incorrect hash calculated by apt, which led to the error A different hash calculated by sha256sum after downloading the file using wget using the same URL as for the browser download I should also note that I have only been referencing the SHA256 hash in each step. The other hash functions are also mismatched when I run sudo apt update, but the file size is the same. I had considered that downloads might be failing due to limited disk space (it is a VM after all) but I don't think that's the case. What am I missing?
QUICK FIX: Shut down Kali VM. Run bcdedit /set hypervisorlaunchtype off in CMD. Reboot. EXPLANATION: This issue is caused by the Windows Hypervisor Platform. This issue cannot be resolved for now (as far as I know). A partial fix is at hand though. And I say "partial" because it involves disabling the platform (also known as "Hyper-V") which will probably break other virtualization solutions you have installed since this is enabled manually. Anyway, here's how to disable it and get your Kali VM running again; Shut down the Kali Virtual Machine. Press Windows logo key + X, then hit A to run Command Prompt as administrator. Type bcdedit /set hypervisorlaunchtype off When you see "The operation completed successfully", type reboot After reboot, boot Kali and update/upgrade.
Kali Linux: apt update returns "Hash Sum mismatch" error
1,382,691,864,000
If I want to make an md5sum list recursively, I would use md5deep, but it has a problem: it doesn't generate the md5sum list in alphabetical order. For example, $ cd /media/sdcard/DCIM $ md5deep -rl * d41d8cd98f00b204e9800998ecf8427e 2014-12-01/IMG_1969.png c3a9d8cb047192a03b857023948a7ba6 2014-12-01/IMG_1971.png bd12c358db0c97230b9d48f67b2c0c98 2014-12-01/IMG_1970.png How can I solve this problem?
You can just pass through sort: $ md5deep -rl * | sort -k2 d41d8cd98f00b204e9800998ecf8427e 2014-12-01/IMG_1969.png bd12c358db0c97230b9d48f67b2c0c98 2014-12-01/IMG_1970.png c3a9d8cb047192a03b857023948a7ba6 2014-12-01/IMG_1971.png If your file name can contain newlines or other strangeness, use this instead (assumes GNU sort): $ md5deep -0rl * | sort -zk2 | tr '\0' '\n' d41d8cd98f00b204e9800998ecf8427e 2014-12-01/IMG_1969.png bd12c358db0c97230b9d48f67b2c0c98 2014-12-01/IMG_1970.png c3a9d8cb047192a03b857023948a7ba6 2014-12-01/IMG_1971.png
How to make a list generated by md5deep in alphabetical order of relative paths?
1,382,691,864,000
Is there any fast method to delete duplicate files based on a hash sum (e.g. SHA-1, to be fast)? I've got some mess in my music files.
There is a package fdupes in Linux (for example, it is present in the Debian repository). It uses MD5 sums and then a byte-by-byte comparison to find duplicate files within a set of directories. It can also delete dups with the -d option, but I've never used that option. You can also grep or sed its output to select files to delete and remove them from disk.
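Without fdupes, a rough one-liner sketch with GNU coreutils gets you the candidate list (unlike fdupes there is no byte-by-byte confirmation here, so review the groups before deleting anything):

```shell
# Hash every file, sort by checksum, then print only groups of files
# whose first 32 characters (the MD5 digest) repeat, one group per block.
find . -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate
```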
How to delete duplicates of files in directory and subdirs?
1,382,691,864,000
Given a file like: a b c How do I get an output like: a 0cc175b9c0f1b6a831c399e269772661 b 92eb5ffee6ae2fec3ad71c777531578f c 4a8a08f09d37b73795649038408b5f33 in an efficient way? (Input is 80 GB)
This could just be a oneliner in perl: head 80gb | perl -MDigest::MD5=md5_hex -nlE'say"$_\t".md5_hex($_)' a 0cc175b9c0f1b6a831c399e269772661 b 92eb5ffee6ae2fec3ad71c777531578f c 4a8a08f09d37b73795649038408b5f33 d 8277e0910d750195b448797616e091ad e e1671797c52e15f763380b45e841ec32 f 8fa14cdd754f91cc6554c9e71929cce7 g b2f5ff47436671b6e533d8dc3614845d h 2510c39011c5be704182423e3a695e91 i 865c0c0b4ab0e063e5caa3387c1a8741 j 363b122c528f54df4a0446b6bab05515 If you need to store the output and want a nice progress bar while chewing this huge chunk: sudo apt install pv #ubuntu/debian sudo yum install pv #redhat/fedora pv 80gb | perl -MDigest::MD5=md5_hex -nlE'say"$_\t".md5_hex($_)' | gzip -1 > 80gb-result.gz
Compute md5sum for each line in a file
1,382,691,864,000
I have a 660297728 byte HDD image with MD5 hash f5a9d398e974617108d26c1654fe7bcb: root@T42# ls -l image -rw-rw-r-- 1 noc noc 660297728 Sep 29 19:00 image root@T42# md5sum image f5a9d398e974617108d26c1654fe7bcb image Now if I dd this image file to /dev/sdb disk and check the MD5 hash of the disk, then it is different from MD5 hash of the image file: root@T42# dd if=image of=/dev/sdb bs=512 1289644+0 records in 1289644+0 records out 660297728 bytes (660 MB) copied, 1006.38 s, 656 kB/s root@T42# md5sum /dev/sdb f6152942a228a21a48c731f143600999 /dev/sdb What might cause such behavior?
Is /dev/sdb exactly 660297728 bytes large? (blockdev --getsize64 /dev/sdb). If not, the checksum would naturally be different. Use cmp image /dev/sdb to find out where the differences are in detail. If it says EOF on image, it's identical.
HDD image file checksum does not match with device checksum
1,382,691,864,000
Is there a way to store my password in /etc/wpa_supplicant/wpa_supplicant.conf as some hash instead of plaintext? By "password" I refer here to the password used for phase2 authentication. I do not refer to the Preshared Key (PSK), which could be hashed using wpa_passphrase. For phase2 MSCHAPv2 or MSCHAP authentication I could store the password as an MD4 hash using nt_password_hash (see example wpa_supplicant.conf line 659). Is there any way to store my PAP password as a hash? Or: Is there a way to store the password in some sort of external storage? The example wpa_supplicant.conf suggests the use of such an external storage (using ext:????) but I could not find any documentation about it. I am aware that storing the password as a hash does not increase wifi security. But as the password MUST be the same as the password for other services (account management, subscriptions, payments ...) I don't want it to be stored as plaintext.
Unfortunately I have to answer the question myself now. "Unfortunately" because the answer is "No, it is not possible". I took a look at how PAP works, and came to the conclusion that it is logically impossible to store the password as a hash value. With PAP, the username and password are sent directly to the authenticating side. Therefore, the password itself must be known; knowing some hash is not sufficient. Thx @tink for searching nevertheless. But I still could not find anything about this external storage thing. The solution for me in this special case is using an unencrypted wifi (which is also provided by my university) and a VPN.
wpa_supplicant store password as hash (WPA-EAP with phase2="auth=PAP")
1,382,691,864,000
Is it possible to use kernel cryptographic functions in the userspace? Let's say, I don't have md5sum binary installed on my system, but my kernel has md5sum support. Can I use the kernel function from userspace? How would I do it? Another scenario would be, if I don't trust the md5sum binary on my system (my system could have been compromised), but I trust my kernel (I am using cryptographically signed kernel modules).
According to this article, titled "A netlink-based user-space crypto API", it would appear that what you're proposing is possible: Linux exposes the kernel's crypto routines to user space (via the AF_ALG socket family, available since kernel 2.6.38). I'm not sure how to answer your question any further than this article, though.
Using kernel cryptographic functions
1,382,691,864,000
Why are there 12 MD5 sums while there are only 5 ISOs on Debian's official site? http://cdimage.debian.org/debian-cd/current/i386/iso-dvd/ I just downloaded debian-8.5.0-i386-DVD-1.iso and checked the MD5 sum, which does not match the given value. Because the md5sum file in the above link has more entries than there are images, I wonder whether it is my download error, their mistake, or something I missed.
Some Images are missing! Only the first n images are available! Where is the rest? We don't store/serve the full set of ISO images for all architectures, to reduce the amount of space taken up on the mirrors. You can use the jigdo tool to recreate the missing ISO images instead.
md5 sum in debian's official directory
1,382,691,864,000
I have 2 files, test.txt and test.txt.md5. I would like to verify the checksum of test.txt. The GNU tool md5sum requires an md5 file with the following format: "[md5-hash][space][space][filename]" (md5sum -c test.txt.md5). Unfortunately my test.txt.md5 only contains the md5 hash (without the spaces and filename). How can I pass the hash from the "test.txt.md5" file to the "md5sum -c" command? I guess I have to use the standard input; however, all examples I have seen try to recreate the md5sum file format. The content of the files is: test.txt: test and test.txt.md5: d8e8fca2dc0f896fd7cb4cb0031ba249
Like many commands, md5sum has the ability to read from the standard input if an option's value is - (from man md5sum): Print or check MD5 (128-bit) checksums. With no FILE, or when FILE is -, read standard input. Since you know the file name, you could simply print the contents of your md5 file, a couple of spaces and then the name and pass that to md5sum: $ cat test.txt.md5 5a6d311c0d8f6d1dd03c1c129061d3b1 $ md5sum -c <(printf "%s test.txt\n" $(cat test.txt.md5)) test.txt: OK Another option would be to add the file name to your file: $ sed -i 's/$/ test.txt/' test.txt.md5 $ md5sum -c test.txt.md5 test.txt: OK
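If you only need the result, you can also skip md5sum -c entirely and compare the digests yourself — a small sketch (the check_md5 function name is made up for illustration):

```shell
# Compare the computed digest of a file ($1) against a bare hash stored
# in another file ($2); prints OK/FAILED and returns a matching status.
check_md5() {
    computed=$(md5sum < "$1" | awk '{print $1}')
    stored=$(cat "$2")
    if [ "$computed" = "$stored" ]; then
        echo "$1: OK"
    else
        echo "$1: FAILED" >&2
        return 1
    fi
}
# check_md5 test.txt test.txt.md5
```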
How to use md5sum for checksum with an md5 file which doesn't contain the filename
1,382,691,864,000
I have a file x1 in one directory (d1) and I'm not sure if the same file has already been copied (as x2) into another directory (d2) (but automatically renamed by an application). Can I check whether the hash of file x1 from directory d1 is equal to the hash of some file x2 existing in directory d2?
This is a good approach, but the search will be a lot faster if you only calculate hashes of files that have the right size. Using GNU/BusyBox utilities: wanted_size=$(stat -c %s d1/x1) wanted_hash=$(sha256sum <d1/x1) find d2 -type f -size "${wanted_size}c" -execdir sh -c 'test "$(sha256sum <"$0")" = "$1"' {} "$wanted_hash" \; -print
Find a file by hash
1,382,691,864,000
I apologize if this has already been answered, or if the answer is simpler than I realize, but I can't seem to figure out the following: When I try to generate an md5 from a string, either with echo -n "string" | md5sum | cut -f1 -d' ' or with echo -n "string" | openssl md5 the result is not 32 characters, as I would expect, but rather 33 (using wc -c). So, I have a few questions: Why do both md5sum and openssl add a trailing space? Is there another way to generate an md5 hash without a trailing newline or space? Does the trailing space really matter? Thank you all in advance.
It's 32 characters! The md5sum is adding a linefeed to the end. You can get rid of it like this: % echo -n string | md5sum|awk '{print $1}'|wc -c 33 % echo -n $(echo -n string | md5sum|awk '{print $1}')|wc -c 32 or you could do it like this: % echo -n $(md5sum <<< 'string'|awk '{print $1}')|wc -c 32 You can tell when one of the commands is adding a newline because the 32 character string will show up on its own line. If no newline is present it should always show up like this: [prompt %] echo -n $(md5sum <<< 'string'|awk '{print $1}') b80fa55b1234f1935cea559d9efbc39a[prompt %]
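A slightly less fragile sketch of the same idea: cut the output down to its first 32 bytes instead of round-tripping it through echo and word splitting. (Note that command substitution, $(...), already strips the trailing newline when capturing into a variable, so the 33rd byte only shows up when you pipe the stream directly into wc -c.)

```shell
# md5sum writes "<32 hex chars>  -\n" for stdin; head -c 32 keeps only
# the digest, with no trailing newline or space.
hash=$(echo -n string | md5sum | head -c 32)
printf '%s' "$hash" | wc -c    # 32
```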
Trailing space when generating md5
1,382,691,864,000
I have a directory full of files. Each file will be copied to a specific type of destination host. I want to calculate an MD5 sum for each file in the directory, and store that md5 sum in a file that matches the name of the file that generated the sum, but with .md5 appended. So, for instance, if I have a directory with: a.bin b.bin c.bin The final result should be: a.bin a.bin.md5 # a.bin's calculated checksum b.bin b.bin.md5 # b.bin's calculated checksum c.bin c.bin.md5 # c.bin's calculated checksum I have attempted this with find exec, and with xargs. With find, I tried this command: find . -type f -exec md5sum {} + > {}.md5 Using xargs, I tried this command: find . -type f | xargs -I {} md5sum {} > {}.md5 In either case, I end up with a file called {}.txt, which isn't really what I am looking for. Could anyone point out how to tweak these to generate the md5 files I am looking to generate?
cd /path/to/files && for file in *; do if [[ -f "$file" ]]; then md5sum -- "$file" > "${file}.md5" fi done
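To descend into subdirectories as well, the same idea can ride on find — a sketch (it excludes .md5 files so that re-runs don't start checksumming the checksum files themselves):

```shell
# Create "<file>.md5" next to every regular file under the directory,
# skipping the checksum files themselves so repeated runs stay stable.
find /path/to/files -type f ! -name '*.md5' \
    -exec sh -c 'md5sum "$1" > "$1.md5"' sh {} \;
```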
Generate MD5sum for all files in a directory, and then write (filename).md5 for each file containing that file's MD5SUM
1,382,691,864,000
I've noticed that if I write an image via dd to a USB drive and then sha256sum the drive, the sum changes. Why? It's never identical to that of the ISO. I am running: sha256sum /dev/sdb (on the block device, not the partition(s))
If your image is smaller than the USB drive then you need to make sure you read back just that size of data from the drive, otherwise all the remainder of the drive will be added into the sha256 and create a different result. e.g. $ ls -l tst.iso -rw-r--r-- 1 root root Jul 1 14:58 tst.iso $ /usr/bin/sha256sum tst.iso 49bc20df15e412a64472421e13fe86ff1c5165e18b2afccf160d4dc19fe68a14 tst.iso $ dd if=tst.iso of=/dev/sdg bs=1M 1024+0 records in 1024+0 records out 1073741824 bytes (1.1 GB) copied, 200.066 s, 5.4 MB/s When we read this back we need to make sure we only read the 1,073,741,824 bytes we wrote. In this case I know it's exactly 1024 blocks of 1M each so I can specify a bs=1M count=1024. $ dd if=/dev/sdg bs=1M count=1024 | sha256sum 1024+0 records in 1024+0 records out 1073741824 bytes (1.1 GB) copied, 37.8798 s, 28.3 MB/s 49bc20df15e412a64472421e13fe86ff1c5165e18b2afccf160d4dc19fe68a14 - Without the total bytes matching then the sha256 would be different.
Why does SHA 256 sum change when writing image to a drive?
1,382,691,864,000
I know that on Linux (at least Debian) all passwords are hashed and stored in /etc/shadow. However, thanks to libpam-cracklib you can add some rules on passwords. For instance, in /etc/pam.d/common-password you can set difok, a parameter that indicates the number of characters that may be the same between an old and a new password. But how can Linux know, when I type in a new password, how similar it is to my old password, since it doesn't know my real password (it just has a hash)? Thanks!
When you ask a PAM module to change a password (or participate in changing a password), the module can retrieve both the new password and the old, as given by the user: as Christopher points out, passwd asks for the old password as well as the new (unless you’re running it as root and changing another user’s password). The module can use that information to compare both passwords, without having to somehow reverse the current hash or enumerate variants. The PAM functions involved include pam_sm_chauthtok and pam_get_item, whose documentation (and the other pages referenced there) should help you understand what’s going on. You can see how it’s done in libpam-cracklib’s source code.
How Linux can compare old and new password?
1,382,691,864,000
Two files containing same song, both in M4A format, output two different results when I calculate their hashes: md5sum f149e2d2a232a410fcf61627578c101a new.m4a ad26ed675342f0f45dcb5f9de9d586df old.m4a They contain same number of bytes: ls -l -rw-rw-r-- 1 cdc cdc 2978666 Jun 26 19:49 new.m4a -rwxrwxr-x 1 cdc cdc 2978666 Jun 26 19:49 old.m4a The exiftool output differs only in dates of creation (pastebin for new.m4a and pastebin for old.m4a). I used Audacity's tools to compare two files (by inverting and mixing them, which makes them nullify each other's similarities) and the result was silence, since there was nothing left, meaning there is no difference between two files. Command cmp gives me: cmp -l 54 375 23 55 51 305 56 41 112 58 375 23 59 51 305 60 45 116 170 375 23 171 51 305 172 41 112 174 375 23 175 51 305 176 41 112 270 375 23 271 51 305 272 41 112 274 375 23 275 51 305 276 41 112 cmp -b new.m4a old.m4a differ: byte 54, line 1 is 375 M-} 23 ^S The only real difference is that old.m4a file was downloaded in December, 2020, and new.m4a was downloaded few hours ago. If needed, you can download new.m4a here and old.m4a here. Originally, both were downloaded from artist's Bandcamp page. P.S. My human ear says that these two are identical.
The metadata as printed by exiftool is part of the file data. I.e: $ diff <(exiftool old.m4a) <(exiftool new.m4a) 2c2 < File Name : old.m4a --- > File Name : new.m4a 18,19c18,19 < Create Date : 2020:12:31 18:13:30 < Modify Date : 2020:12:31 18:13:34 --- > Create Date : 2021:06:26 18:57:37 > Modify Date : 2021:06:26 18:57:41 32,33c32,33 < Track Create Date : 2020:12:31 18:13:30 < Track Modify Date : 2020:12:31 18:13:30 --- > Track Create Date : 2021:06:26 18:57:37 > Track Modify Date : 2021:06:26 18:57:37 40,41c40,41 < Media Create Date : 2020:12:31 18:13:30 < Media Modify Date : 2020:12:31 18:13:30 --- > Media Create Date : 2021:06:26 18:57:37 > Media Modify Date : 2021:06:26 18:57:37 (This is after making both files have the same file dates - as in dates stored on disk - not the metadata stored in the file.) When the MD5 sum is created, all of the file's data is used. As bytes differ, the checksums differ. That said, the audio data is the same. Besides cmp you can do a hex dump to view the raw diff. For example (skip the first 50 bytes, read at most 300, output hex): $ diff <(od -j 50 -N 300 -t x1 old.m4a) <(od -j 50 -N 300 -t x1 new.m4a) 1c1 < 0000062 00 00 dc 13 c5 4a dc 13 c5 4e 00 00 ac 44 00 71 --- > 0000062 00 00 dc fd 29 21 dc fd 29 25 00 00 ac 44 00 71 8c8 < 0000242 68 64 00 00 00 07 dc 13 c5 4a dc 13 c5 4a 00 00 --- > 0000242 68 64 00 00 00 07 dc fd 29 21 dc fd 29 21 00 00 14,15c14,15 < 0000402 00 20 6d 64 68 64 00 00 00 00 dc 13 c5 4a dc 13 < 0000422 c5 4a 00 00 ac 44 00 71 bc 00 55 c4 00 00 00 00 --- > 0000402 00 20 6d 64 68 64 00 00 00 00 dc fd 29 21 dc fd > 0000422 29 21 00 00 ac 44 00 71 bc 00 55 c4 00 00 00 00 For example: OLD: 00 00 dc 13 c5 4a dc 13 c5 4e 00 00 ac 44 00 71 |___________||___________| --- NEW: 00 00 dc fd 29 21 dc fd 29 25 00 00 ac 44 00 71 |___________||___________| Then by converting to dates: M4A uses the Apple Mac OS X HFS+ timestamp (number of seconds since midnight, January 1, 1904, GMT).
old: dc 13 c5 4a => 3692283210 => Thursday, December 31, 2020 18:13:30 dc 13 c5 4e => 3692283214 => Thursday, December 31, 2020 18:13:34 new: dc fd 29 21 => 3707578657 => Saturday, June 26, 2021 18:57:37 dc fd 29 25 => 3707578661 => Saturday, June 26, 2021 18:57:41
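The hex-to-date conversion above can be scripted; a sketch with GNU date (the 2082844800-second constant is the offset between the HFS+ epoch, 1904-01-01 00:00:00 UTC, and the Unix epoch, 1970-01-01; the hfs_to_date name is made up):

```shell
# Convert an HFS+ timestamp (seconds since 1904-01-01 00:00:00 UTC)
# to a human-readable UTC date by rebasing it onto the Unix epoch.
hfs_to_date() {
    date -u -d "@$(( $1 - 2082844800 ))" '+%Y-%m-%d %H:%M:%S'
}
hfs_to_date "$((0xdc13c54a))"   # 2020-12-31 18:13:30 (old create date)
hfs_to_date "$((0xdcfd2921))"   # 2021-06-26 18:57:37 (new create date)
```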
Two supposedly same M4A files are giving different hash results. Why?
1,382,691,864,000
#!/bin/bash cd /path-to-directory md5=$(find . -type f -exec md5sum {} \; | sort -k 2 | md5sum) zenity --info \ --title= "Calculated checksum" \ --text= "$md5" The process of a recursive checksum calculation for a directory takes a while. The bash script doesn't wait until the process is finished; it just moves on to the next command, which is the dialog box that displays the calculated checksum. So the dialog box shows a wrong checksum. Is there an option to tell the script to wait until the calculation of the checksum is finished? Furthermore, is there an option to pipe the progress of the checksum calculation to some kind of progress bar, like in zenity for example?
As written, your script does wait: the command substitution doesn't return until the whole find | md5sum pipeline has finished. For a pulsating progress bar: #! /bin/sh - export LC_ALL=C cd /path/to/dir || exit { md5=$( find . -type f -print0 | sort -z | xargs -r0 md5sum | md5sum ) exec >&- zenity --info \ --title="Checksum" \ --text="$md5" } | zenity --progress \ --auto-close \ --auto-kill \ --pulsate \ --title="${0##*/}" \ --text="Computing checksum" For an actual progress bar, you'd need to know the number of files to process in advance. With zsh: #! /bin/zsh - export LC_ALL=C autoload zargs cd /path/to/dir || exit { files=(.//**/*(ND.)) } > >( zenity --progress \ --auto-close \ --auto-kill \ --pulsate \ --title=$0:t \ --text="Finding files" ) md5=( $( zargs $files -- md5sum \ > >( awk -v total=$#files '/\/\// {print ++n * 100 / total}' | { zenity --progress \ --auto-close \ --title=$0:t \ --text="Computing checksum" || kill -s PIPE $$ }) \ | md5sum ) ) zenity --info \ --title=$0:t \ --text="MD5 sum: $md5[1]" Note that outside of the C locale, on GNU systems at least, filename order is not deterministic, as some characters sort the same and also filenames are not guaranteed to be made of valid text, hence the LC_ALL=C above. The C locale order is also very simple (based on byte value) and consistent from system to system and version to version. Beware that this means that error messages, if any, will be displayed in English instead of the user's language (but then again the Computing checksum, Finding files, etc. are not localised either, so it's just as well). Some other improvements over your approach: Using -exec md5sum {} + or -print0 | xargs -r0 md5sum (or the zargs equivalent) minimises the number of md5sum invocations, each md5sum invocation being passed a number of files. -exec md5sum {} \; means running one md5sum per file, which is very inefficient. We sort the list of files before passing them to md5sum. Doing sort -k2 in general doesn't work, as file names can contain newline characters. In general, it's wrong to process file paths line-based.
You'll notice we use a .// prefix in the zsh approach for awk to be able to count files reliably. Some md5sum implementations also have a -z option for NUL-delimited records.
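If zsh isn't available, the same deterministic-ordering idea can be sketched in plain sh. This is only a sketch assuming GNU find/sort/xargs (for the NUL-delimited -print0/-z/-r0 options); the sample directory and file names are made up for the demo:

```shell
# Deterministic directory checksum, following the answer's pipeline:
# NUL-delimited paths, byte-order sort, batched md5sum invocations.
dir=$(mktemp -d)
printf 'hello\n' > "$dir/a.txt"
mkdir "$dir/sub"
printf 'world\n' > "$dir/sub/b.txt"

dir_md5() {
  # $1: directory to checksum (GNU find/sort/xargs assumed)
  (cd "$1" && find . -type f -print0 \
      | LC_ALL=C sort -z \
      | xargs -r0 md5sum \
      | md5sum \
      | awk '{print $1}')
}

sum1=$(dir_md5 "$dir")
# Touching mtimes must not change the result; changing content must.
touch "$dir/a.txt"
sum2=$(dir_md5 "$dir")
printf 'extra\n' >> "$dir/a.txt"
sum3=$(dir_md5 "$dir")
```

Because the file list is NUL-delimited and byte-sorted, the result is stable across runs and unaffected by timestamps — only content or path changes alter it.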
Wait until md5sum command is completed
1,382,691,864,000
I have a bunch of files and for each row there is a unique value I'm trying to obscure with a hash. However there are 3M rows across the files and a rough calculation of the time needed to complete the process is hilariously long at 32 days. for y in files*; do cat $y | while read z; do KEY=$(echo $z | awk '{ print $1 }' | tr -d '"') HASH=$(echo $KEY | sha1sum | awk '{ print $1 }') sed -i -e "s/$KEY/$HASH/g" $y done done To improve this processes speed I assume I'm going to have to introduce some concurrency. A hasty attempt based on https://unix.stackexchange.com/a/216475 led me to N=4 ( for y in gta*; do cat $y | while read z; do (i=i%N)); ((i++==0)); wait ((GTA=$(echo $z | awk '{ print $1 }' | tr -d '"') HASH=$(echo $GTA | sha1sum | awk '{ print $1 }') sed -i -e "s/$KEY/$HASH/g) & done done ) Which performs no better. Example input "2000000000" : ["200000", "2000000000"] "2000000001" : ["200000", "2000000001"] Example output "e8bb6adbb44a2f4c795da6986c8f008d05938fac" : ["200000", "e8bb6adbb44a2f4c795da6986c8f008d05938fac"] "aaac41fe0491d5855591b849453a58c206d424df" : ["200000", "aaac41fe0491d5855591b849453a58c206d424df"] Perhaps I should read the lines concurrently then perform the hash-replace on each line?
FWIW I think this is the fastest way you could do it in a shell script: $ cat tst.sh #!/usr/bin/env bash for file in "$@"; do while IFS='"' read -ra a; do sha=$(printf '%s' "${a[1]}" | sha1sum) sha="${sha%% *}" printf '%s"%s"%s"%s"%s"%s"%s\n' "${a[0]}" "$sha" "${a[2]}" "${a[3]}" "${a[4]}" "$sha" "${a[6]}" done < "$file" done $ ./tst.sh file "e8bb6adbb44a2f4c795da6986c8f008d05938fac" : ["200000", "e8bb6adbb44a2f4c795da6986c8f008d05938fac"] "aaac41fe0491d5855591b849453a58c206d424df" : ["200000", "aaac41fe0491d5855591b849453a58c206d424df"] but as I mentioned in the comments you'd be better off for speed of execution using a tool with sha1sum functionality built in, e.g. python.
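Another angle on the speed problem, not from the answer above: hash each *unique* key once and then do a single awk substitution pass, instead of one sha1sum plus one sed -i per input row. This is only a sketch assuming the key is always the first double-quoted field, as in the question's samples; note that gsub() treats the key as a regex, which is safe here only because the keys are plain digits:

```shell
# One sha1sum call per unique key, then one awk pass over the file.
tmp=$(mktemp -d)
cat > "$tmp/input" <<'EOF'
"2000000000" : ["200000", "2000000000"]
"2000000001" : ["200000", "2000000001"]
EOF

# 1. collect unique keys (the first double-quoted field)
cut -d'"' -f2 "$tmp/input" | sort -u > "$tmp/keys"

# 2. hash each key once, building a key<TAB>hash map
while IFS= read -r key; do
  h=$(printf '%s' "$key" | sha1sum)
  printf '%s\t%s\n' "$key" "${h%% *}"
done < "$tmp/keys" > "$tmp/map"

# 3. single pass: replace every occurrence of each key
awk -F'\t' 'NR==FNR { m[$1] = $2; next }
            { for (k in m) gsub(k, m[k]); print }' \
    "$tmp/map" "$tmp/input" > "$tmp/output"
```

With 3M rows but far fewer distinct keys, this reduces the number of sha1sum processes from millions to the number of unique keys.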
Concurrency of a find, hash val, and replace across large amount of rows
1,382,691,864,000
I want to delete same file with different names scattered in folders. This command works fine for searching and listing the files. Then I manually delete the files. Is it possible to add delete option to the below command ? find /folder -type f -exec md5sum {} + | grep '^aafa26a6610d357d8e42f44bc7e76635'
try find ... | awk '{ $1 = "rm"; print }' | bash — this replaces the actual md5sum (aaf...) with rm. This will not work if a filename contains special characters, nor if a file is write-protected (replace rm with rm -f for that case).
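A somewhat safer variant of the same idea that avoids piping generated commands into bash: it relies on md5sum's fixed-width output (32 hex digits plus two separator characters, so the path starts at column 35), which survives spaces in names but still breaks on newlines. The target hash and sample files below are invented for the demo:

```shell
# Delete every file under a directory whose md5 matches a target,
# without generating shell code.
tmp=$(mktemp -d)
printf 'duplicate\n' > "$tmp/one.txt"
printf 'duplicate\n' > "$tmp/two copy.txt"   # name with a space
printf 'keeper\n'    > "$tmp/three.txt"

target=$(printf 'duplicate\n' | md5sum | awk '{print $1}')

# md5sum lines are "<32 hex chars><2 chars><path>", so cut from col 35
find "$tmp" -type f -exec md5sum {} + \
  | grep "^$target" \
  | cut -c35- \
  | while IFS= read -r f; do rm -- "$f"; done
```

Using -exec md5sum {} + also batches files per md5sum invocation instead of spawning one process per file.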
Find files based on MD5 and delete
1,382,691,864,000
I am storing some multi-GB files on two hard drives. After several years in offline storage (unfortunately in far from ideal conditions), I often get some files with bit-rot (the two copies differ), and want to recover the file. The problem is, the files are so big, that within the same file, on some storage devices one bit gets rotten, whereas on another one a different bit gets bit-rotten, and so neither of the disks contains an uncorrupted file. Therefore, instead of calculating the MD5 checksums of the entire files, I would like to calculate these checksums of each 1KB-chunk. With such a small chunk, there is a lot less chance that the same 1KB-chunk will get corrupted on both hard drives. How can this be done? I am sure it shouldn't be hard, but I spent over an hour trying different ways, and keep failing.
I am not offering a complete solution here, but rather I'm hoping to be able to point you along the way to building your own solution. Personally I think there are better tools, such as rsync, but that doesn't seem to fit the criteria in your question. I really wouldn't use split because that requires you to be able to store the split data as well as the original. Instead I'd go for extracting blocks with dd. Something like this approach may be helpful for you. file=/path/to/file blocksize=1024 # Bytes per block numbytes=$(stat -c '%s' "$file") numblocks=$((numbytes / blocksize)) [[ $((numblocks * blocksize)) -lt $numbytes ]] && : $((numblocks++)) blockno=0 while [[ $blockno -lt $numblocks ]] do md5sum=$(dd bs=$blocksize count=1 skip=$blockno if="$file" 2>/dev/null | md5sum) # Do something with the $md5sum for block $blockno # Here we write to stdout echo "$blockno $md5sum" : $((blockno++)) done
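As a GNU-specific aside (an assumption about your platform, not part of the answer's portable loop): coreutils split can stream each fixed-size chunk through a command with --filter, which yields one checksum line per block without writing any temporary chunk files:

```shell
# GNU split --filter: pipe each fixed-size block through md5sum.
tmp=$(mktemp -d)
# 3000 bytes -> three 1 KiB blocks (the last one partial)
head -c 3000 /dev/zero > "$tmp/data"

split -b 1024 --filter='md5sum' "$tmp/data" > "$tmp/block.md5"
```

Each output line has the form "hash  -". Produce one such list per copy of the file and diff the lists to locate the first corrupted block.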
How to separately checksum each "block" of a large file
1,382,691,864,000
I have been using md5deep for a very long time, more than 10 years. It is a natural "go to" tool for me since it offers recursion, matching and missing modes, and even a triage mode, which I do like. I know about and have used the newer tool, hashdeep, and have both installed on at least one machine. I noticed I had differing versions on different boxes and didn't think much of it until I installed something else yesterday and noticed that md5deep was being "held back". Unsure why, and quick research didn't find a dependency issue, so I upgraded it. As a result hashdeep was installed (no problem, like I say, I have used it) but although it "appears" md5deep wasn't removed, it certainly feels that way. me@home:~$ sudo apt-get install md5deep Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: hashdeep The following NEW packages will be installed: hashdeep The following packages will be upgraded: md5deep 1 to upgrade, 1 to newly install, 0 to remove and 105 not to upgrade. Need to get 0 B/119 kB of archives. After this operation, 1,123 kB disk space will be freed. Do you want to continue? [Y/n] (Reading database ... 487441 files and directories currently installed.) Preparing to unpack .../archives/md5deep_4.4-2_all.deb ... Unpacking md5deep (4.4-2) over (4.2-1) ... Selecting previously unselected package hashdeep. Preparing to unpack .../hashdeep_4.4-2_amd64.deb ... Unpacking hashdeep (4.4-2) ... Processing triggers for man-db (2.7.4-1) ... Setting up hashdeep (4.4-2) ... Setting up md5deep (4.4-2) ... me@home:~$ sudo find / -name md5deep me@home:~$ As can be seen, it appears that no package was removed, 1 was installed (hashdeep) and one was upgraded (md5deep). But it appears as though it's not even there. I thought maybe it might be a wrapper for hashdeep but it's no longer available on my system at all. It actually looks like it HAS been removed. 
I don't have a problem with upgrading to a newer version, even if it has a new name now, but if it had been clear that it would remove the old one I would have done it differently. I didn't want to run dual hashes over TBs of data; my assumption is it would take considerably longer, and md5 was fine. I have done further testing with hashdeep and have to admit that I do like it, although I wouldn't quite say yet that I prefer it. I have a number of hash files that are single hashes (i.e. md5 as opposed to both md5 and sha1). In researching downgrading packages, I found this post: https://askubuntu.com/questions/138284/how-to-downgrade-a-package-via-apt-get however when I run this, I only get the current version: $ apt-cache showpkg md5deep Package: md5deep Versions: 4.4-2 (/var/lib/apt/lists/au.archive.ubuntu.com_ubuntu_dists_wily_universe_binary-amd64_Packages) (/var/lib/dpkg/status) Description Language: File: /var/lib/apt/lists/au.archive.ubuntu.com_ubuntu_dists_wily_universe_binary-amd64_Packages MD5: 03e121f5deb42145602b68fdf028531d Description Language: en File: /var/lib/apt/lists/au.archive.ubuntu.com_ubuntu_dists_wily_universe_i18n_Translation-en MD5: 03e121f5deb42145602b68fdf028531d Reverse Depends: hashdeep:i386,md5deep 4.4-1~ hashdeep:i386,md5deep 4.4-1~ krusader,md5deep hashdeep,md5deep 4.4-1~ hashdeep,md5deep 4.4-1~ Dependencies: 4.4-2 - hashdeep (0 (null)) Provides: 4.4-2 - Reverse Provides: hashdeep 4.4-2 Question Without uninstalling hashdeep, am I able to bring back a functioning md5deep to my system?
According to /usr/share/doc/hashdeep/README.md.gz, it's all one executable that acts differently depending on the name of the called program. If the program is called md5deep, it acts like md5deep. I don't use it myself, but if I'm reading the docs right, you should be able to create a symlink to it that will produce the behavior you expect. Do the following (as root / sudo / whatever): ln -s /usr/bin/hashdeep /usr/local/bin/md5deep
how to bring back md5deep
1,382,691,864,000
With the commands md5sum, sha1sum, sha256sum I can take a text file having an hash and a path per line and verify the entire list of files in a single command, like sha1sum -c mydir.txt. (Said text file is easy to produce with a loop in find or other.) Is there a way to do the same with a list of CRC/CRC32 hashes? Such hashes are often stored inside zip-like archives, like ZIP itself or 7z. For instance: $ unzip -v archive.zip Archive: archive.zip Length Method Size Cmpr Date Time CRC-32 Name -------- ------ ------- ---- ---------- ----- -------- ---- 8617812 Stored 8617812 0% 12-03-2015 15:20 13fda20b 0001.tif Or: $ 7z l -slt archive.7z Path = filename Size = 8548096 Packed Size = Modified = 2015-12-03 14:20:20 Attributes = A_ -rw-r--r-- CRC = B2F761E3 Encrypted = - Method = LZMA2:24 Block = 0
Try RHash Try RHash. There are packages for Cygwin, Debian. Example $ echo -n a > a.txt; echo -n b > b.txt; echo -n c > c.txt ✔ $ rhash --crc32 --simple *.txt > checksums.crc32 ✔ $ cat checksums.crc32 e8b7be43 a.txt 71beeff9 b.txt 06b9df6f c.txt ✔ $ rhash --crc32 --check checksums.crc32 --( Verifying checksums.crc32 )------------------------------------------------- a.txt OK b.txt OK c.txt OK -------------------------------------------------------------------------------- Everything OK ✔ Note 1: --simple format If you don't use the --simple formatting option then rhash will default to a different format. And this may not be what you want: $ rhash --crc32 *.txt ; Generated by RHash v1.3.7 on 2020-06-03 at 16:02.51 ; Written by Kravchenko Aleksey (Akademgorodok) - http://rhash.sf.net/ ; ; 1 15:58.36 2020-06-03 a.txt ; 1 15:58.36 2020-06-03 b.txt ; 1 15:58.36 2020-06-03 c.txt a.txt E8B7BE43 b.txt 71BEEFF9 c.txt 06B9DF6F ✔ Note 2: --all option If you wanna go crazy: try the --all option to get ALL supported hashes at once.
Command to verify CRC (CRC32) hashes recursively
1,382,691,864,000
I have a ext3 filesystem on a .img file. After mounting and unmounting it, I noticed that the md5sum is changed, even if no file inside was changed! md5sum myfilesystem.img XXXX myfilesystem.img mount -t ext3 myfilesystem.img temp/ umount temp/ md5sum myfilesystem.img YYYY myfilesystem.img Why does XXXX differs from YYYY? I clearly didn't touch anything inside myfilesystem.img.
Because, if you mount the ext3 filesystem in writable mode, a few things get updated, such as the last mount time. Check whether this also happens when you mount with -o ro.
md5sum change after mount?
1,382,691,864,000
I have a .deb file and its SHA1 checksum information. How do I check the .deb file's authenticity using this checksum before installing? There's many entries on Google for "how to verify checksums for installed packages" which is mind-boggingly useless yet none for checking BEFORE installing. Bonus points if you can explain why people are checking files AFTER installing them.
To check whether your package matches the SHA1 sum you have, run sha1sum /path/to/package.deb and compare the output. If you have the sum in a file of the form sum package.deb you can run sha1sum -c shafile to check the sum directly. To determine the authenticity of the package, you’ll need to determine the authenticity of the SHA1 sum. People check MD5 sums after installation for a variety of reasons; the one that makes sense in my opinion is to check for involuntary corruption (e.g. after disk errors, or an operator error). The MD5 sums available after installation are shipped in the package and are stored locally, so they don’t provide any external authentication.
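A self-contained walk-through of that workflow, using an invented stand-in file rather than a real .deb:

```shell
# Create a stand-in "package", record its SHA1, then verify it the way
# you would before installing a real .deb.
tmp=$(mktemp -d)
printf 'fake package contents\n' > "$tmp/package.deb"

( cd "$tmp" && sha1sum package.deb > package.deb.sha1 )

# Verification succeeds while the file is intact...
if ( cd "$tmp" && sha1sum -c package.deb.sha1 ); then ok=0; else ok=1; fi

# ...and fails once the file is altered.
printf 'tamper\n' >> "$tmp/package.deb"
if ( cd "$tmp" && sha1sum -c package.deb.sha1 ) >/dev/null 2>&1
then bad=0; else bad=1; fi
```

The exit status of sha1sum -c is what a script should act on, not the printed OK/FAILED lines.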
SHA1 verification of external deb package before install
1,382,691,864,000
I'm trying to write a bash script which, among several other things, will "log in" as a user (preferably with su if possible) without needing to interact with the password prompt, i.e., will automatically input the password. I only need it to perform one command and then the session can end. This is for an "automatic checker" for an introductory Linux class I'm teaching. The students have an exercise where they create usernames with specific passwords on their Raspberry Pis (running Raspbian), and I want this script to automatically verify the passwords are correct. The students run the "automatic checker" on their Pis and I verify the output, so they have a chance to fix any errors before turning in. Since this is for a classroom exercise with dummy passwords, the security of these passwords is completely irrelevant. I know there are already a few questions about automatic logins, but none of them work for me. The closest I've come to a solution is getting the su command to say a terminal is required. Using "expect" will not work because it will require installing packages on the students' Pis. If there are other ways to verify the passwords are correct then I'm open to those. I don't think I can do a hashed password comparison due to the salt.
if you have root : salt=$(awk -F\$ '$1 ~ /student:/ { print $3 }' /etc/shadow) hashedpasswd=$(awk -F: '$1 == "student" { print $2} ' /etc/shadow) expected=$(mkpasswd -m sha-512 given-passwd $salt) if [ "$hashedpasswd" = "$expected" ] then echo good else echo bad fi replace student with the relevant string, of course, and replace given-passwd as well. see my question for more details : /etc/shadow : how to generate $6$ 's encrypted password?
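The same idea can be sketched without mkpasswd, using openssl passwd instead. On a real system you'd use -6 for SHA-512; the demo below uses -1 (MD5-crypt) only because it is universally available, and the shadow-style line is fabricated rather than read from /etc/shadow:

```shell
# Extract the salt from a shadow-style entry, recompute the crypt hash
# for a candidate password, and compare.
shadow_line="student:$(openssl passwd -1 -salt abcdefgh secret):18000:0:99999:7:::"

stored=$(printf '%s\n' "$shadow_line" | awk -F: '$1 == "student" { print $2 }')
salt=$(printf '%s\n' "$shadow_line" | awk -F'$' '{ print $3 }')

check_password() {
  # $1: candidate password -- recompute with the stored salt and compare
  [ "$(openssl passwd -1 -salt "$salt" "$1")" = "$stored" ]
}
```

check_password returns 0 for the right password and nonzero otherwise, so it drops straight into an if.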
Automatically verify passwords for classroom exercise? [duplicate]
1,382,691,864,000
(From a novice's point of view) The other day I was thinking about how a typical "passwd" command works in a LINUX OS. For example, when we type in "passwd", a prompt appears letting us type in our password; it then wraps that password with cryptographic algorithms and saves it in /etc/shadow. So I came up with a "Password/login emulation" of my own. Initially it saves the username along with their password in a file named mango.txt in the form of "username::password", and next time the same user tries to log in, it asks for the username and password. So I came up with these two scripts. Script 1: Prompts for a username and a password and saves them in a file called mango.txt. # Title: username.sh #!/bin/bash # What I'm planning to do here is create a username script which allows a user to add themselves by putting in their names and their password at the time of login; it will save itself to a file with the username and password. # If the username already exists, it tells the user that a user with the same name exists, else it adds the new user along with a password. The password is saved in md5 hash form. exec 2>/dev/null touch mango.txt echo -n "Enter username: " read usame if [ "$usame" == "" ]; then echo -e "Username can not be blank\n" ./username.sh else grep -q $usame mango.txt if [ "$?" == 0 ]; then echo -e "A username with the same name already exists\n" ./username.sh else read -s -p "Password: " passwd while true; do if [ "$passwd" == "" ]; then echo -e "Password can not be blank\n" else echo $usame::$(echo $passwd | md5sum) >> mango.txt echo -e "\nUser $usame added\n" break fi done fi fi Script 2: If this could be added in "bash.bashrc", then it would run at each terminal startup and ask for the username and password. 
If the username and password match those in mango.txt, it lets the user log in; else the terminal exits (; Plain passwords are compared in their md5sum form with the mango.txt file passwords. #Title: login.sh # A simple login bash script # trap intercepts your keyboard if you press ctrl+z or ctrl+c trap '' INT TSTP read -p "Enter username: " usname grep -q $usname mango.txt if [ "$?" -gt 0 ]; then echo "Username not found" sleep 1 pkill -9 bash # That's a bit too much I guess, but oh well else read -s -p "Password: " password if [ "$password" == "" ]; then echo "Password can not be blank" ./login.sh else # saves the password in md5sum format in tmp.txt echo $password | md5sum > tmp.txt tmp="$(cat tmp.txt)" # if the md5 hashes match, then allow login saying yo cat mango.txt | grep -q $usname::$tmp if [ "$?" == 0 ]; then echo -e "\nyo" # else print login failed else echo -e "\nLogin failed" sleep 1 pkill -9 bash fi fi fi rm tmp.txt # Deletes the tmp file afterwards I'm pretty sure it's nowhere near how it exactly works in a LINUX system (not to mention cryptographic tools like ccrypt and scrypt and different salting mechanisms), but it's the best I could come up with. Perhaps a little nudge in the right direction as to how it actually works would be great from the experts. (: The encryption mechanism is what I'm super curious about.
You would use a slow, salted, secure hash function: a key derivation function. We use a hash function so that the password is hidden; no one can read it, not even the admin. Hashes cannot be reversed. When the user logs in, we hash their password input and compare it with the stored hash. Salting means adding a large random string to the password before hashing. We have to store the salt with the hash. We do that to slow down dictionary attacks. A dictionary attack is to hash a dictionary of known common passwords and look for matches. Now the attacker needs to create a dictionary for each user (as they all have a unique salt). We use a slow hash to further slow the dictionary attack, at the expense of compute time each time a user logs in. You can read more at https://en.wikipedia.org/wiki/Hash_function https://en.wikipedia.org/wiki/Secure_Hash_Algorithms https://en.wikipedia.org/wiki/Cryptographic_hash_function#Password_verification https://en.wikipedia.org/wiki/Key_derivation_function https://www.youtube.com/watch?v=b4b8ktEV4Bg https://www.youtube.com/watch?v=DMtFhACPnTY For what is used on some Gnu/Linux systems, see this related question https://crypto.stackexchange.com/questions/40841/what-is-the-algorithm-used-to-encrypt-linux-passwords Editing /etc/shadow -- don't do it. https://unix.stackexchange.com/a/81248/4778
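To make the salting-plus-iteration idea concrete, here is a toy sketch — emphatically not a real KDF (a real system uses crypt()/argon2/scrypt, not a shell loop), just an illustration of why each user needs their own attack dictionary and why iteration adds cost:

```shell
# Toy illustration of "salt, then iterate a hash": same password with a
# different salt, or a different password, yields a different result.
toy_kdf() {
  # $1: password, $2: salt, $3: iteration count
  h="$2$1"
  i=0
  while [ "$i" -lt "$3" ]; do
    h=$(printf '%s' "$h" | sha256sum | awk '{print $1}')
    i=$((i + 1))
  done
  printf '%s\n' "$h"
}

stored=$(toy_kdf "hunter2" "r4nd0m" 100)
```

The iteration count is the "slow" knob: a legitimate login pays it once, while a dictionary attacker pays it for every guess.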
What encryption mechanism is used to store passwords in `/etc/shadow` in a typical Unix, such as Gnu/Linux?
1,382,691,864,000
Sometimes I use an unreliable medium (flash) to store a good deal of data. To at least recognise bit flips I store a file with the md5sums alongside. This file is usually created by a variation of find -type f -exec md5sum "{}" \; >MD5SUM. Later I copy some more files onto it, and now I would like to add the checksums of the new files without having to recalculate the old ones. Sadly, the clocks of some of the machines I use are screwed, so using find -newer <file> -exec md5sum "{}" \; >>MD5SUM is not an option. Basically I would like to get the difference between the file list created by find -type f and the list in the MD5SUM file. Any ideas how to do this in an easy and elegant manner? Thanks in advance!
If this is going to be an on-going process, then you'll need two files, the old and new (which would become the old for next time). #!/bin/sh # change directory to either first argument or to current directory cd ${1:-"."} || exit 1 # if cannot cd, then exit # get the md5 values for all the files in the directory tree find . -type f -not -name .md5sum.last -exec md5sum {} \; | sort > .md5sum.tmp # if called before, then get only the differences in the newer if [ -f .md5sum.last ]; then comm -13 .md5sum.last .md5sum.tmp else # otherwise show all the output cat .md5sum.tmp fi # replace the older with the current for next time mv .md5sum.tmp .md5sum.last The sort and comm -13 are the key. Sort is obvious, but comm (short for "common") will show lines that are in the first file (column 1), second file (column 2) or both (column 3). The -13 option says to "take away column one and three" leaving only lines that are not in just the older and not common to both. Unfortunately, if you cannot trust the time stamps on the files, then this would be a very intensive process for large directory trees.
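The comm -13 step is the part that tends to surprise people, so here is a minimal, self-contained demonstration with fabricated checksum lines:

```shell
# comm -13 keeps only lines unique to the second (newer) sorted file --
# i.e. checksums of files added since the last run.
tmp=$(mktemp -d)
printf '%s\n' 'aaa  ./old1' 'bbb  ./old2' | sort > "$tmp/last"
printf '%s\n' 'aaa  ./old1' 'bbb  ./old2' 'ccc  ./new' | sort > "$tmp/now"

comm -13 "$tmp/last" "$tmp/now" > "$tmp/added"
```

Both inputs must be sorted with the same collation, which is why the script sorts both lists before comparing them.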
How do I easily update list of md5sums?
1,382,691,864,000
I have been using Argon2 in my terminal (Debian), but I keep messing up, and I have been unable to find the manual or any other documentation that lists examples of commands that work. Could someone give me a basic rundown of what the most important commands are? Or point me to a good reference? I have the usage right in front of me, but that does not help me much. Usage: argon2 [-h] salt [-i|-d|-id] [-t iterations] [-m log2(memory in KiB) | -k memory in KiB] [-p parallelism] [-l hash length] [-e|-r] [-v (10|13)] Password is read from stdin Parameters: salt The salt to use, at least 8 characters -i Use Argon2i (this is the default) -d Use Argon2d instead of Argon2i -id Use Argon2id instead of Argon2i -t N Sets the number of iterations to N (default = 3) -m N Sets the memory usage of 2^N KiB (default 12) -k N Sets the memory usage of N KiB (default 4096) -p N Sets parallelism to N threads (default 1) -l N Sets hash output length to N bytes (default 32) -e Output only encoded hash -r Output only the raw bytes of the hash -v (10|13) Argon2 version (defaults to the most recent version, currently 13) -h Print argon2 usage
You can run: echo -n "hashThis" | argon2 saltItWithSalt -l 32 which will give you an output with multiple lines of information about the resulting hash. To get a one-line return of the same information encoded into a single argon2 string, you can add -e at the end. If you just want the raw bytes, use -r instead of -e.
Argon2 Commands in the Terminal
1,382,691,864,000
Is there a hash-based filesystem, where: There is a store of blocks (perhaps 512b, 4KB or 128KB) indexed by the hash of their contents. Each block has a usage count. When it reaches zero, the block's storage is freed. Files are just a length and a list of the block hashes. This would enable many optimisations, such as: Large files can be copied almost for free (in terms of both time and required storage). Copies of large files use Copy-on-Write to store alterations with minimal disk usage. File equality becomes quick to calculate. Does such a filesystem already exist? If not, is it not feasible or is it not a good idea?
It sounds like you're talking about a copy-on-write filesystem with deduplication. Both ZFS and Btrfs work like this to some degree. Btrfs has offline deduplication tools that can merge duplicate blocks some time after they've been written. ZFS can do online deduplication. Is online deduplication a good idea? It depends on your use case, but probably not. According to the Wikipedia article for ZFS, "effective use of deduplication may require large RAM capacity; recommendations range between 1 and 5 GB of RAM for every TB of storage." Offline deduplication is probably practical in many more cases.
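To see why deduplicating, content-addressed storage makes copies cheap and equality checks fast, here is a toy sketch of the scheme from the question (the 4-byte block size and md5 are chosen purely for the demo; a real system would use larger blocks, a stronger hash, and reference counting):

```shell
# Toy content-addressed block store: each block is stored once under
# blocks/<md5>, and a "file" is just an ordered list of block hashes.
store=$(mktemp -d)
mkdir "$store/blocks"

cas_put() {
  # $1: absolute path of source file, $2: manifest path to write
  work=$(mktemp -d)
  split -b 4 "$1" "$work/chunk."
  : > "$2"
  for c in "$work"/chunk.*; do
    h=$(md5sum < "$c" | cut -d' ' -f1)
    [ -e "$store/blocks/$h" ] || cp "$c" "$store/blocks/$h"  # dedup
    printf '%s\n' "$h" >> "$2"
  done
  rm -rf "$work"
}

cas_get() {
  # $1: manifest path; reassembles the file on stdout
  while IFS= read -r h; do
    cat "$store/blocks/$h"
  done < "$1"
}

printf 'abcdabcdxyz' > "$store/orig"   # first two blocks are identical
cas_put "$store/orig" "$store/manifest"
cas_get "$store/manifest" > "$store/copy"
```

Copying a file is just copying its manifest, and two files are equal exactly when their manifests are — the optimisations the question anticipates.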
Hash-based filesystem?
1,382,691,864,000
I'm trying to pass an MD5 password to chpasswd but it doesn't seem to work. echo username:$(openssl passwd -1 -salt salt password) Then I try to pass this to chpasswd to change the password echo 'username:$1$salt$aldkjflsfj' | /usr/sbin/chpasswd -e However, when I do this the password change does not seem to take effect -- /etc/shadow is updated but if I try to use the password it does not work. This does work: echo username:password | /usr/sbin/chpasswd passwd also works More info: $ S=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 5 | head -n 1) $ echo username:$(openssl passwd -1 -salt "$S" password) username:$1$nPkvS$eKljAIRsFLXOffXti.ZtO/ $ echo 'username:$1$nPkvS$eKljAIRsFLXOffXti.ZtO/' | chpasswd -e $ grep username /etc/shadow username:$1$nPkvS$eKljAIRsFLXOffXti.ZtO/:16722:0:99999:7:::
The arguments must be quoted, otherwise the shell interprets special characters inside those arguments: echo "username:"$(openssl passwd -1 -salt "$salt" "$password") Use double quotes here so that the shell expands the variables. Now, the echo command must be quoted too: echo 'username:$1$salt$aldkjflsfj' | ... Use single quotes here, because the $-signs are part of the entry and must persist. Warning: I don't recommend changing passwords like this. Those commands, and therefore the plaintext passwords, can be seen in the listings of ps and top for example. openssl has a mechanism for such cases, to read the password from a file.
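Putting the two quoting rules together, here is a small self-contained sketch (the salt and password are fabricated; the chpasswd line is shown commented out since it needs root):

```shell
# Build the chpasswd -e input in a variable instead of echoing an
# unquoted string, so no $-expansion can mangle the hash.
salt=salty123                                  # MD5-crypt salt, max 8 chars
hash=$(openssl passwd -1 -salt "$salt" password)
entry="username:$hash"
# printf '%s\n' "$entry" | chpasswd -e         # needs root; not run here
```

With a fixed salt, openssl passwd -1 is deterministic, so the result can be checked against /etc/shadow later.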
chpasswd and openssl
1,382,691,864,000
This question is very much related to this one. I want to log into a user and password protected wifi which uses PEAP and maybe MS-CHAPv2. My wpa_supplicant.conf has to contain an entry like this: network={ ssid="<somessid>" key_mgmt=WPA-EAP eap=PEAP identity="<someidentity>" password="<somepassword>" } Now, I do not want to get displayed the password (but rather have it typed in once by a buddy who knows it – I do and shall not know it), so I also do not want to have it stored in plaintext. Is it possible to replace the password="<somepassword>" entry by a hashsum of the password, preferably generated without the password being shown? If so, how can I do it? Do I have to create an additional hash if MS-CHAPv2 is used? (In the other answer I read something about NtPasswordHash, which didn’t yield much duckduckgo-results and couldn’t be found in the wpa_supplicant.conf man pages as suggested in the other question.) Alternatively: Is it possible to let a buddy type in his log-in data (i.e. identity and password) only once to let me use his account only once for internettt access?
No, you cannot replace the password by a hash. It doesn't matter what the protocol is. The client needs to know the password, and then either it sends the password to the server, or it sends some data that proves that the client knows the password. The server can be content to know the hash of the real password, because when it receives a candidate password, it computes the hash of the candidate and compares it with the real hash. But the client has to come up with the real password. Whatever you store in this file, in the end, the wpa_supplicant program has to be able to reconstruct the password. This means that you can reconstruct the password. Your buddy cannot prevent you from learning the password unless he doesn't give you the password. As soon as your buddy has typed his password on your computer, you can retrieve the password if you want. You can modify the program that your buddy types the password into to write it to a file, or you can inspect the program's memory afterwards. If your buddy types his password on your computer, he has to trust you not to use it in any way that you promised not to use. It's as if your buddy lends you his car and asks you to park it: he can't prevent you from taking it for a ride; he only has your word that you won't drive it further than the car park. If you want to share accounts, you'll have to share the password. If you don't want to share accounts, you'll need to get your own account with its own password.
Store password as hash in wpa_supplicant.conf? [duplicate]
1,591,870,767,000
I am dealing with transferring large files from one machine to another (600GB+) and I'm tarring them up using tar -cpvzf file.tar.gz -C PATH_TO_DIR DIR Once finished with the tarring process, the following is done: split -d -b 2G file.tar.gz file_part_ This creates a bunch of file_part_00, file_part_01, ... until the whole file is split into 2GB chunks. Before transferring the file, I loop through each part in the directory where the tar was split and collect their md5 hashes using an equivalent to: md5sum PART_NAME >> list_md5.start Once each part has been hashed, I do the following: sort -u list_md5.start (This sorts them and removes duplicates, just to be safe ya know) The parts are then transferred one by one in the order they're in the list_md5.start. Once they arrive on the other computer, their md5 hash is collected using the same method, but in a different list; let's call it list_md5_2.start. After the transfer, before putting the parts back together, I run the following: diff list_md5.start list_md5_2.start If no difference is found, I continue to the next part. Otherwise, I give up and delete all the parts. When it comes to putting them back together I do the following: cat file_part_* > file.tar.gz.incomplete (The incomplete is there because I have a watchdog waiting to untar any .tar.gz it comes across). Once the cat is done, the file is renamed using: mv file.tar.gz.incomplete file.tar.gz At this point, the watchdog detects it and untars it using: tar -C DEST -xzf file.tar.gz --totals --unlink-first --recursive-unlink At this point, I get an error I can't debug: Tar Failed 2 gzip: stdin: unexpected end of file tar: Unexpected EOF in archive tar: Unexpected EOF in archive tar: Error is not recoverable: exiting now /PATH/TO/DEST After untarring, the tar is removed regardless of whether it failed (no point in keeping large files that failed to untar). 
It is worth noting that sometimes the md5sums don't match up, which also results in stopping the process (this is checked before the cat assembling step). I have tried ensuring the names were not invalid. I've tried changing the part size to smaller sizes. I've tried manually going through the process and still either got an issue with a mismatch in md5sums or the EOF error. This is all done on Ubuntu machines which have both been updated (no updates pending). Does anyone have an idea as to how to solve this issue?
The problem was solved by adding additional storage space. To be specific, I added a 2TB hdd which is used to hold the tar while it's split on it. Originally, the whole process was done on a 6TB hdd with other large files on it, giving us at most 3TB of storage space to work with. The issue was noticed when we had something large downloading in the background which took up most of the space, reproducing the broken tar issue from the question. This is probably not the most elegant solution, but removing the original file after being tarred would involve significant overhead which would have taken more time than simply adding additional storage space. In case someone stumbles upon this question and is going to go through the same route as me, here are the steps I followed to add the new hdd: https://askubuntu.com/a/125277/ I would like to point out that someone did suggest checking if the storage space is sufficient, and I would like to give them credit. Here's the suggestion: Make sure that your first tar -cpvzf ... is running without any errors, that the return code (echo $?) is 0 and that the disk space is sufficient. – Cyrus Aug 16 at 19:37 However, this suggestion was incomplete, as at the time there was enough storage space for tarring so it never returned any errors.
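For anyone wanting to automate that pre-flight disk-space check, a rough sketch might look like this (the 2x headroom factor is my own arbitrary choice, not from the answer; the sample file is fabricated):

```shell
# Pre-flight check: refuse to split/tar unless the destination has
# enough free space (with headroom), since silent ENOSPC was the culprit.
tmp=$(mktemp -d)
head -c 4096 /dev/zero > "$tmp/source"

need_kb=$(du -sk "$tmp/source" | awk '{print $1}')
avail_kb=$(df -Pk "$tmp" | awk 'NR==2 {print $4}')

if [ "$avail_kb" -gt $((need_kb * 2)) ]; then
  enough=yes     # proceed with tar/split
else
  enough=no      # bail out before producing a truncated archive
fi
```

df -P guarantees one line per filesystem, which keeps the NR==2 field extraction reliable.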
Failing to untar
1,591,870,767,000
Do MD5 checksums contain a checkbit? I have to copy some MD5 checksums by hand (there's no other way) and was wondering whether there is any code out there that can validate a checksum as being valid in the same way one can validate a credit card number. Just to be clear, I'm not asking how to generate an MD5 sum from a file so that I can compare it with the sum I've been given, I'm asking if it's possible (and I doubt it is) to validate that an MD5 sum is a genuine MD5 sum without actually making any reference back to the bytes that have been used to generate the sum. I want to identify a possible typo.
Basically, it does not have any check bit. To identify a typo, you might try sharing a checksum (for example, MD5) of your MD5 sum over the same channel and checking it.
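Since MD5 itself carries no check digits, you can manufacture your own transcription check by hashing the hex string and keeping a short prefix — a home-grown scheme, not any standard:

```shell
# A short "check digit" for hand-copied MD5 sums: hash the hex string
# itself and keep the first few characters as a transcription check.
sum='b4460802b5853b7bb257fbf071ee4ae2'

check_digits() {
  # first 6 hex chars of md5(sum-string)
  printf '%s' "$1" | md5sum | cut -c1-6
}

good=$(check_digits "$sum")
```

Publish the six extra characters next to the sum; after copying by hand, recompute them and compare before trusting the transcription.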
Sanity checking MD5 sums
1,591,870,767,000
I am trying to validate a file using the following command: $ md5sum myfile_v2.1.ova -c myfile_v2.1.md5 md5sum: myfile_v2.1.ova: no properly formatted MD5 checksum lines found myfile_v2.1.ova: OK The exit status code seems to be 1 $ echo $? 1 However, when reading the help for the md5sum command, strict mode (which fails upon formatting issues) should be explicitly enabled: $ md5sum --help | grep -- --strict --strict exit non-zero for improperly formatted checksum lines Why on top of all that do I get an OK about the file?
The correct way to verify checksums in an MD5 checksum file is $ md5sum -c file.md5 In your case: $ md5sum -c myfile_v2.1.md5 This will read the pathnames and checksums in the MD5 file and check them against the corresponding files on disk. Your command line: md5sum myfile_v2.1.ova -c myfile_v2.1.md5 This is the equivalent of md5sum -c myfile_v2.1.ova myfile_v2.1.md5 (due to the way some GNU utilities move command line options to the start of the argument list). This means "check the signatures found in these two MD5 checksum files". Since the first file isn't an MD5 checksum file, it complains. Also note that it correctly verifies the one checksum from the MD5 file.
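A minimal end-to-end sketch of the correct usage, with throwaway files:

```shell
# Create a file and record its checksum in the standard "digest  name" format
printf 'hello\n' > demo.txt
md5sum demo.txt > demo.md5
# Verify against the recorded checksum; prints "demo.txt: OK" and exits 0
md5sum -c demo.md5
```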
md5sum warning about formatted lines
1,591,870,767,000
I am running my below shell script on machineC which gets the md5sum of my files in my PRIMARY directory in machineC itself. #!/bin/bash export PRIMARY=/data01/primary for entry in "$PRIMARY"/* do local_md5sum=$(/usr/bin/md5sum "$entry" | awk '{print $1}') echo $local_md5sum done As soon as I run above shell script then it prints out the md5sum of my files in machineC and it is working fine. Now same file for which I am calculating the md5sum can be either in machineA or machineB as well so I need to do ssh on machineA and machineB and do the same md5sum on the same file and store it in remote_md5sum variable. If the file is not there in machineA, then it should be there in machineB for sure and the files will be in this directory in machineA and machineB /bat/test/data/snapshot/20140918 So I got below shell script which I am running on machineC and is also trying to find the md5sum of the files on machineA or machineB #!/bin/bash # folder in machineC export PRIMARY=/data01/primary readonly SERVERS=(machineA machineB) export SERVERS_1=${SERVERS[0]} export SERVERS_2=${SERVERS[1]} export FILES_LOCATION=/bat/test/data/snapshot/20140918 for entry in "$PRIMARY"/* do # find local md5sum on machineC local_md5sum=$(/usr/bin/md5sum "$entry" | awk '{print $1}') echo $local_md5sum # find remote md5sum of the file which will be on machineA or machineB remote_md5sum=$(ssh user@$SERVERS_1 /usr/bin/md5sum "$entry" | awk '{print $1}' || ssh bullseye@$SERVERS_2 /usr/bin/md5sum "$entry" | awk '{print $1}') echo "Remote Checksum: $remote_md5sum" # now compare local_md5sum and remote_md5sum done But whenever I run above shell script my ssh command fails and it doesn't store md5sum value of that file in remote_md5sum. Is there anything wrong in this syntax? remote_md5sum=$(ssh user@$SERVERS_1 /usr/bin/md5sum "$entry" | awk '{print $1}' || ssh user@$SERVERS_2 /usr/bin/md5sum "$entry" | awk '{print $1}')
I have modified your script and this one works now. I have added some comments inside the script to make it more understandable. Let me know if you need more help. #!/bin/bash #The export path which we set here. export PRIMARY=/home/ramesh #The main for loop execution starts here. for entry in "$PRIMARY"/* do #Get the base name of the file which we check in the remote servers. #Get just the filenames without the path. #I am going to use the filename in the remote server to check. filename=$(basename "$entry") echo "File Name: $filename" #Calculate the MD5Sum locally. local_md5sum=$(md5sum "$entry") echo "Local MD5Sum: $local_md5sum" #Check if the file exists in server1. #Otherwise I can check in the other server. #The single quotes are expanded by the remote shell, so names with spaces work. if ssh ramesh@server1 stat "/home/ramesh/'$filename'" \> /dev/null 2\>\&1 then #I have the file in server1 and so I get the md5sum from server1. #I store the md5sum inside remote_md5sum variable. remote_md5sum=$(ssh ramesh@server1 "cd /home/ramesh/; find -name '$filename' -exec md5sum {} \;") else #Now, I know the file is in server2 as it is not present in server1. remote_md5sum=$(ssh ramesh@server2 "cd /home/ramesh/; find -name '$filename' -exec md5sum {} \;") fi echo "Remote MD5Sum: $remote_md5sum" done Testing I wanted to test the above script for file names with spaces as well. It works well and this is the output that I get when I execute the script. File Name: file1 Local MD5Sum: 39eb72b3e8e174ed20fe66bffdc9944e /home/ramesh/file1 Remote MD5Sum: b5fc751f836c5430b617bf90a8c4725d ./file1 File Name: file with spaces Local MD5Sum: 36707e275264f4ac25254e2bbe5ef041 /home/ramesh/file with spaces Remote MD5Sum: 36707e275264f4ac25254e2bbe5ef041 ./file with spaces
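The final comparison step can be sketched without any ssh at all; here two local directories stand in for the two machines (all paths invented):

```shell
# Stand-ins for the local file and its remote copy
mkdir -p localdir remotedir
printf 'payload\n' > localdir/file1
cp localdir/file1 remotedir/file1

# Keep only the digest field, as in the original script
local_md5sum=$(md5sum localdir/file1 | awk '{print $1}')
remote_md5sum=$(md5sum remotedir/file1 | awk '{print $1}')

if [ "$local_md5sum" = "$remote_md5sum" ]; then
    echo "file1: checksums match"
else
    echo "file1: checksums differ"
fi
```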
How to find md5sum of files on remote machines by doing ssh?
1,591,870,767,000
I was looking at my companies hashing implementation and I saw that the passwords for the root user are stored in base64 encoding. What is to stop anyone from simply copying the password and doing a base64 -d on it? Or for someone to do the same thing to any other base64 password? Am I missing something here?
A hash isn't an encoding, it's a non-reversible mathematical transformation. You can't take a hash and reverse the calculation to find the original value: you can only try brute-forcing a hash by trying passwords and hashing them to see if the hash matches. Passwords are typically stored as a base64-encoded, salted hash of the password. It's the hash part that makes the password unrecoverable (and the salt part means that pre-generated lists of hashes are useless). Ideally the hash itself uses a lengthy calculation, so that brute-forcing is more expensive (we don't care about the cost of hashing in the nominal case since you only need to run one hash to determine whether a user-supplied password is valid). For more information, see How to securely hash passwords?
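The difference is easy to demonstrate in a shell: Base64 decodes back to its input, while a hash has no inverse operation:

```shell
# Base64 is an encoding: fully reversible
enc=$(printf 'hunter2' | base64)
dec=$(printf '%s' "$enc" | base64 -d)
echo "decoded: $dec"   # recovers the original string

# A hash is one-way: there is no "sha256sum -d"
printf 'hunter2' | sha256sum
```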
What is the point of hashing?
1,591,870,767,000
I was wondering: Is it possible to create a checksum of a directory (using something like md5sum) Is it possible to recursively create checksum for each file inside the dir (and then print it out)? Or both? I'm using bash
md5sum won't take a directory as input; however, tar cf - FOO | md5sum will checksum it. If a file changes anywhere within FOO, the checksum will change, but you won't have any hint of which file. The checksum will also change if any file metadata changes (permissions, timestamps, …). You might consider using : find FOO -type f -exec md5sum {} \; > FOO.md5 which will md5 every file individually and save the result in FOO.md5. This makes it easier to check which file has changed. This variant only depends on file content, not on metadata.
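Since find's traversal order is not guaranteed, sorting the per-file listing makes it reproducible across runs; a self-contained sketch:

```shell
mkdir -p FOO/sub
printf 'alpha\n' > FOO/a.txt
printf 'beta\n'  > FOO/sub/b.txt
# One checksum line per file, sorted by pathname for a stable listing
find FOO -type f -exec md5sum {} + | sort -k 2 > FOO.md5
# Later: verify every file against the stored listing
md5sum -c FOO.md5
```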
Get checksum of directory on bash
1,591,870,767,000
I ran into a situation that I don't really understand. I have a bunch of backup files in a recursive structure of which I want to calculate the md5. When I add some additional file extensions the process exits (exit code 0) without yielding any output. find . -type f -iname "*.3gp" -o -iname "*.avi" -o -iname "*.mov" -o -iname "*.mp4" -print0 | xargs -0 md5sum find . -type f -iname "*.3gp" -o -iname "*.avi" -o -iname "*.mov" -o -iname "*.mp4" -o -iname "*.mpg" -print0 | xargs -0 md5sum The first one works fine, the second one doesn't yield any output. I've even tried it in a directory where there are no mpg files, same behavior. Is there a limit on the number of arguments to find? I am running OSX and I installed md5sum from Macports. Extra information There seems to be something odd with the pipe and I'm inclined to blame the filenames. Further investigation in another folder shows me that the find command seems to work and there are 129 video files, 1 of which is .mpg. When I try the find+md5sum it returns after only 1 file. I ran a similar command in other folder that only contains pictures and it worked fine (found 80k files, yield 80k hashes). Pictures@2006$ find . -type f -iname "*.3gp" -o -iname "*.avi" -o -iname "*.mov" -o -iname "*.mp4" | wc -l 128 Pictures@2006$ find . -type f -iname "*.3gp" -o -iname "*.avi" -o -iname "*.mov" -o -iname "*.mp4" -o -iname "*.mpg" | wc -l 129 Pictures@2006$ find . -type f -iname "*.3gp" -o -iname "*.avi" -o -iname "*.mov" -o -iname "*.mp4" -o -iname "*.mpg" -print0 | xargs -0 md5sum c21a78f2b2d5ca773b47647315ad91f8 ./pending photos/Video [%]/P007.MPG Pictures@2006$ I also noticed that the second filename to process contained punctuation, a plus sign and non-ascii characters. Is it possible that the error may be due to file naming? Is there any workaround? /Esplai/+Nou/20060604 Dinar d'últim dia d'esplai[Barbacoa al torrent de l'Escaiola]/MVI_7702.AVI
Let me first mention that -print0 is non-standard and not the best solution. Better is to use "execplus", e.g. find dir -type f -exec cmd {} + Your main problem, however, is that the operators have precedence and your -print0 is "anded" with the last -iname primary only. So the right method is to put the -o'ed primaries in parentheses (escaped for the shell): find dir \( -name '*.x1' -o -name '*.x2' \) -exec cmd {} + You may of course add more -o -name operators if you need them.
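A self-contained check of the grouping, with invented file names:

```shell
mkdir -p vids
touch vids/clip.mp4 vids/movie.mpg vids/notes.txt
# The parentheses group the -o alternatives, so -exec applies to all of them
find vids -type f \( -iname '*.mp4' -o -iname '*.mpg' \) -exec md5sum {} +
```

Both video files are hashed; notes.txt is excluded.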
find and md5sum not yielding any output (find -o limit?)
1,591,870,767,000
I find it odd as I managed to get different md5sums using the exact same file from the same directory. The output is as below: [root@testlabs Config]# ls Backup_Files hostname1-config.uac hostname2-config.uac hostname3-config.uac [root@testlabs Config]# ls hostname1-config.uac | md5sum 2a52f0eb11f6478a4f8aeee1c0ac90dd - [root@testlabs Config]# md5sum hostname1-config.uac d41d8cd98f00b204e9800998ecf8427e hostname1-config.uac May I know which is the correct way to get the correct md5sum result? Thank you. I did this to compare the MD5 of two files (original file and backup copy file). The naming convention of the original file is hostname1-config.uac, while the backup file is hostname1-201411071649.uac; but they are just copies (cp -p). First Method (Does not work) #!/bin/bash # ... # ls hostname1-config.uac | md5sum hostname1-config.uac > /tmp/md5sum.tmp ARCHIVE_DIR="/tmp/Archive" FULL_HOSTNAME=`/bin/sort -d /tmp/full_hostname.tmp` TIMESTAMP=`/bin/date +%Y%m%d%H%M -r $FULL_HOSTNAME` for HOSTNAME in `/bin/sort -d /tmp/hostname.tmp` do ls $ARCHIVE_DIR | grep -i --text $HOSTNAME-$TIMESTAMP.uac | md5sum -c /tmp/md5sum.tmp >> /tmp/md5sum2.tmp done Second Method (Worked perfectly in Command-line but not in script) In Command-Line [root@testlabs Config]# md5sum hostname1-config.uac ca3434263400ea2b4ffbc107ef729b8a hostname1-config.uac [root@testlabs Config]# md5sum hostname1-config.uac > md5.tmp [root@testlabs Config]# cd /tmp/Archive [root@testlabs Archive]# md5sum hostname1-config.uac ca3434263400ea2b4ffbc107ef729b8a hostname1-config.uac [root@testlabs Archive]# echo 'Tampered!' > hostname1-config.uac [root@testlabs Archive]# cat hostname1-config.uac | md5sum -c /Network_Backup/Config/md5.tmp hostname1-config.uac: FAILED md5sum: WARNING: 1 of 1 computed checksum did NOT match [root@testlabs Archive]# rm -f hostname1-config.uac [root@testlabs Archive]# cd /tmp/Config [root@testlabs Config]# cp -p hostname1-config.uac /tmp/Archive [root@testlabs Config]# cd /tmp/Archive [root@testlabs Archive]# cat hostname1-config.uac | md5sum -c /Network_Backup/Config/md5.tmp hostname1-config.uac: OK In Script #!/bin/bash # ... # CONFIG_DIR="/tmp/Config" ARCHIVE_DIR="/tmp/Archive" HOSTNAME=`/bin/sort -d /tmp/hostname.tmp` FULL_HOSTNAME=`/bin/sort -d /tmp/full_hostname.tmp` TIMESTAMP=`/bin/date +%Y%m%d%H%M -r $FULL_HOSTNAME` cd $CONFIG_DIR md5sum $FULL_HOSTNAME > /tmp/md5sum.tmp cd $ARCHIVE_DIR cat $HOSTNAME-$FILE_TIMESTAMP.uac | md5sum -c /tmp/md5sum.tmp >> /tmp/md5sum2.tmp The returned result in /tmp/md5sum2.tmp: hostname1-config.uac: FAILED open or read
The response FAILED open or read happens when the file specified in the md5 checksum file (md5sum.tmp in your case) does not exist. For example: [user@localhost tmp]$ cd /tmp/testfolder [user@localhost testfolder]$ touch dog [user@localhost testfolder]$ md5sum dog > /tmp/md5sum.tmp [user@localhost testfolder]$ md5sum -c /tmp/md5sum.tmp dog: OK [user@localhost testfolder]$ cd .. [user@localhost tmp]$ md5sum -c /tmp/md5sum.tmp md5sum: dog: No such file or directory dog: FAILED open or read md5sum: WARNING: 1 listed file could not be read Note that I believe the md5sum program does not look at the standard input when passed the -c option. It simply looks at the checksums in the file specified by the -c option. If they exist and the filename matches, then it compares them and all is happy. While there is probably a better way, storing the result of two separate md5sums in a variable and then comparing them with an if statement is probably the approach I would take. Kind of like this. #!/bin/bash firstfile=`cat dog | md5sum ` # alternately could have used firstfile=$(md5sum < dog ) to avoid UUOC secondfile=`cat mouse | md5sum ` if [ "$firstfile" == "$secondfile" ]; then echo "They Match!" else echo "They Don't Match!" fi
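A way to sidestep the filename bookkeeping entirely is to hash both files from standard input, so md5sum reports '-' for each and only the digests are compared:

```shell
printf 'same bytes\n' > first
cp first second
# Reading from stdin keeps the file name out of md5sum's output
if [ "$(md5sum < first)" = "$(md5sum < second)" ]; then
    echo "They Match!"
else
    echo "They Don't Match!"
fi
```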
Different MD5Sum for the same file in same directory
1,591,870,767,000
I'm using this command to rename files with random characters from sha1sum and move all files from subdirectories to the current directory: for fname in `find . -type f`; do mv "$fname" $(echo "$fname" | sha1sum | cut -f1 -d' ').html; done But the question is: Does it create unique filenames? I'm worried the generated name from sha1sum may not be unique (generated twice or more). If I run the above command, and then run another one in another directory, will it generate a unique file name for each file?
sha1sum outputs will be unique as long as inputs are unique (unless you are extremely unlucky and hit a sha1sum collision). As for your use case: It's a good habit to use printf '%s' "$fname" instead of echo "$fname"; the former will work when $fname is -n, or -e,… See also enzotib's remark, I missed that at first glance. Also, I'm not sure exactly what your motivations are, but you may consider feeding sha1sum with file contents instead of filenames. This way, you would obtain a unique filename for each unique content.
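The name-versus-contents distinction is easy to see with two throwaway files:

```shell
printf 'same content\n' > one.html
cp one.html two.html

# Hashing the *names*: unique per path, even for identical contents
n1=$(printf '%s' one.html | sha1sum | cut -d' ' -f1)
n2=$(printf '%s' two.html | sha1sum | cut -d' ' -f1)

# Hashing the *contents*: identical files collapse to the same digest
c1=$(sha1sum < one.html | cut -d' ' -f1)
c2=$(sha1sum < two.html | cut -d' ' -f1)

echo "name digests:    $n1 $n2"
echo "content digests: $c1 $c2"
```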
Rename files with random characters from sha1sum. Will the names be unique?
1,591,870,767,000
In AIX system (v 7.1) sha1sum is calculating different hash codes when its piped directly to the output of tar compared to when it reads a file. What are the reasons for this ? Are there ways to workaround this and get the hash code directly from the tar piped output ? (In others systems like Debian and Ubuntu, piped sha1sum to tar output works well) folder=myfolder tarfile=myfolder.tar tar -cf $tarfile $folder && sha1sum $tarfile fe2dcba2b25d4bbb35460309c8bb87a1d2514d7d myfolder.tar tar -cf $tarfile $folder && sha1sum $tarfile fe2dcba2b25d4bbb35460309c8bb87a1d2514d7d myfolder.tar tar -cf - $folder > $tarfile && sha1sum $tarfile fe2dcba2b25d4bbb35460309c8bb87a1d2514d7d myfolder.tar tar -cf - $folder > $tarfile && sha1sum $tarfile fe2dcba2b25d4bbb35460309c8bb87a1d2514d7d myfolder.tar tar -cf - $folder | sha1sum f1dd1a0c4e82dd5c441664869b656c7bce799270 - tar -cf - $folder | sha1sum f1dd1a0c4e82dd5c441664869b656c7bce799270 -
The reason for that problem is the command tar. It has internal records made of a fixed number of 512-byte blocks. The number of blocks per record can be set with the parameter -b. Some implementations can adjust the number of blocks automatically according to the file descriptor, depending on whether it's a tape device, a regular file, or a pipe. Fixing the number of blocks with the -b parameter fixed the problem, like this: tar -b1 -cf - $folder | sha1sum. But to match the default blocks predefined in the first two commands of the question I had to use -b20 (10240-byte records are the default for archives stored in regular files): $ tar -b20 -cf - $folder > $tarfile && sha1sum $tarfile fe2dcba2b25d4bbb35460309c8bb87a1d2514d7d myfolder.tar $ tar -b20 -cf - $folder |sha1sum fe2dcba2b25d4bbb35460309c8bb87a1d2514d7d
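The same effect is reproducible with GNU tar, where changing the blocking factor changes the zero padding at the end of the stream, and with it the hash (folder contents invented):

```shell
mkdir -p myfolder
printf 'x\n' > myfolder/f
# Identical contents, different record sizes -> different padding -> different hashes
a=$(tar -b20 -cf - myfolder | sha1sum)
b=$(tar -b1  -cf - myfolder | sha1sum)
echo "$a"
echo "$b"
```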
Different hash code when piping "sha1sum" to "tar" output
1,591,870,767,000
I'm learning about SHA1 (specifically wrt Git), and I wanted to sanity-check my understanding by calculating a string's SHA1 with different methods - I expected identical SHA1 hashes, but instead I got distinct results from three of four methods: >git hash-object --stdin <<< "Apple Pie" 23991897e13e47ed0adb91a0082c31c82fe0cbe5 . >sha1sum <<< "blob 9\0Apple Pie" 332cd56150dc8b954c0b859bd4aa6092beafa00f - . >printf 'blob 9\0Apple Pie' > foo.txt >sha1sum foo.txt 9eed377bbdeb4aa5d14f8df9cd50fed042f41023 foo.txt . >openssl sha1 foo.txt SHA1(foo.txt)= 9eed377bbdeb4aa5d14f8df9cd50fed042f41023 The accepted answer to this Stack Overflow question says that git hash-object runs a SHA1 hash on the specified content prefixed with "blob [file size]/0". Thus I explicitly prefixed that text to the strings I tested with the non-git method. Why all these different results? I thought SHA1 was a specific and unique hash of a given string, and that there were not different "types" of SHA1 - is that not true?
The differences don't come from SHA1, but the input. The here-string syntax appends a newline, as we can see with od: $ od -c <<< foo 0000000 f o o \n So in your git command the input is the ten characters Apple Pie\n. In addition, the double quotes you used in the here-strings don't support backslash escapes like \n or \nnn, so <<< "blob 9\0Apple Pie" gives a string containing a literal backslash and a zero. printf however does interpret \0 as the NUL byte, and it doesn't add a trailing newline, so with the newline added and the length fixed, we should get the expected output: $ printf 'blob 10\0Apple Pie\n' | sha1sum 23991897e13e47ed0adb91a0082c31c82fe0cbe5 - We could try to do the same with the here-string using the $'' quote which does support \0 as representing the NUL byte, but that may not work in all shells, since the NUL byte ends the string. E.g. Bash cannot deal with it, zsh can: $ zsh -c "sha1sum <<< $'blob 10\0Apple Pie'" 23991897e13e47ed0adb91a0082c31c82fe0cbe5 -
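Both pitfalls can be checked directly by counting bytes: inside double quotes, \0 stays two literal characters, while in printf's format string it becomes a single NUL byte (relying on the same printf behavior the answer's own example uses):

```shell
# "\0" in double quotes: a literal backslash and a zero -> 17 bytes
printf '%s' "blob 9\0Apple Pie" | wc -c
# \0 in printf's format string: one real NUL byte -> 16 bytes
printf 'blob 9\0Apple Pie' | wc -c
```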
Different methods to get SHA1 give different results
1,591,870,767,000
I have a series of gzip files which I wish to store more efficiently using xz, without losing traceability to a set of checksums of the gzip files. I believe this amounts to being able to recreate the gzip files from the xz files, though I'm open to other suggestions. To elaborate... If I have a gzip file named target.txt.gz, and I decompress it to target.txt and discard the compressed file, I want to exactly recreate the original compressed file target.txt.gz. By exactly, I mean a cryptographic checksum of the file should indicate that it is exactly the same as the original. I initially thought this must be impossible, because a gzip file contains metadata such as original file name and timestamp, which might not be preserved upon decompression, and metadata such as a comment, the source operating system, and compression flags, which are almost certainly not preserved upon decompression. But then I thought to modify my question: is there a minimal amount of header information that I could extract from the gzip file that, in combination with the uncompressed data, would allow me to recreate the original gzip file. And then I thought that the answer might still be no due to the existence of tools such as Zopfli and 7-zip, which can create gzip-compatible streams which are better (therefore different) from the standard gzip program. As far as I am aware, the gzip file format does not record which of these compressors created it. So my question becomes: are there other options I haven't thought of that might mean I can achieve my goal as set out in the first paragraph after all?
This may be helpful: https://github.com/google/grittibanzli Grittibanzli is a tool to compress a deflate stream to a smaller file, which can be decoded to the original deflate stream again. That is, it compresses not only the data inside the deflate stream, but also the deflate-related information such as LZ77 symbols and Huffman trees, to reproduce a gzip, png, ... file exactly.
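On the metadata side of the question: much of the run-to-run variation in gzip output comes from the stored file name and timestamp, which gzip -n omits, so for a fixed gzip implementation and compression level the output then depends only on the content:

```shell
printf 'payload\n' > a.txt
printf 'payload\n' > b.txt
# -n: do not store the original name and timestamp in the gzip header
h1=$(gzip -cn a.txt | md5sum)
h2=$(gzip -cn b.txt | md5sum)
[ "$h1" = "$h2" ] && echo "identical gzip streams"
```

This doesn't solve the original problem (the existing archives were made without -n, and other compressors like Zopfli still produce different streams), but it removes one source of irreproducibility going forward.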
Can I recreate a gzip file exactly, given the original uncompressed file?
1,591,870,767,000
I'm trying to apply SHA256 and then Base64 encode a string inside a shell script. Got it working with PHP: php -r 'echo base64_encode(hash("sha256", "asdasd", false));'. But I'm trying to get rid of the PHP dependency. Got this line that works well in the terminal (using the fish shell): $ echo -n "asdasd" | shasum -a 256 | cut -d " " -f 1 | xxd -r -p | base64 X9kkYl9qsWoZzJgHx8UGrhgTSQ5LpnX4Q9WhDguqzbg= But when I put it inside a shell script, the result differs: $ cat foo.sh #!/bin/sh echo -n "asdasd" | shasum -a 256 | cut -d " " -f 1 | xxd -r -p | base64 $ ./foo.sh IzoDcfWvzNTZi62OfVm7DBfYrU9WiSdNyZIQhb7vZ0w= How can I make it produce expected result? My guess is that it's because of how binary strings are handled?
The problem is that you are using different shells. The echo command is a shell builtin for most shells and each implementation behaves differently. Now, you said your default shell is fish. So, when you run this command: ~> echo -n "asdasd" | shasum -a 256 | cut -d " " -f 1 | xxd -r -p | base64 X9kkYl9qsWoZzJgHx8UGrhgTSQ5LpnX4Q9WhDguqzbg= you will get the output shown above. This is because the echo of fish supports -n. Apparently, on your system, /bin/sh is a shell whose echo doesn't support -n. If the echo doesn't understand -n, what is actually being printed is -n asdasd\n. To illustrate, lets use printf to print exactly that: $ printf -- "-n asdasd\n" -n asdasd Now, if we pass that through your pipeline: $ printf -- "-n asdasd\n" | shasum -a 256 | cut -d " " -f 1 | xxd -r -p | base64 IzoDcfWvzNTZi62OfVm7DBfYrU9WiSdNyZIQhb7vZ0w= Thats the output you get from your script. So, what happens is that echo -n "asdasd" is actually printing the -n and a trailing newline. A simple solution is to use printf instead of echo: $ printf "asdasd" | shasum -a 256 | cut -d " " -f 1 | xxd -r -p | base64 X9kkYl9qsWoZzJgHx8UGrhgTSQ5LpnX4Q9WhDguqzbg= The above will work the same on the commandline and in your script and should do so with any shell you care to try. Yet another reason why printf is better than echo.
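As a cross-check, the same digest-then-encode pipeline can be written without xxd, letting openssl emit the raw digest bytes; the input string and expected value are the ones from the question:

```shell
# -binary emits the raw 32-byte digest, which base64 then encodes
printf 'asdasd' | openssl dgst -sha256 -binary | base64
# → X9kkYl9qsWoZzJgHx8UGrhgTSQ5LpnX4Q9WhDguqzbg=
```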
Apply SHA256 and Base64 to string in script
1,591,870,767,000
I would be glad if someone could explain the meaning of qemu-img check <filename> command. Man pages provide very poor information about it: check [-f fmt] [-r [leaks | all]] filename Perform a consistency check on the disk image filename. If "-r" is specified, qemu-img tries to repair any inconsistencies found during the check. "-r leaks" repairs only cluster leaks, whereas "-r all" fixes all kinds of errors, with a higher risk of choosing the wrong fix or hiding corruption that has already occurred. Only the formats "qcow2", "qed" and "vdi" support consistency checks. For instance, can I use this command in case of need to check the hash sum of the disk image (rather hash sums of the files stored on them)?
It is like a disk check on the physical file that constitutes your virtual disk. See the part: Only the formats "qcow2", "qed" and "vdi" This means that your virtual disk can be corrupted: wrong bytes or missing data somewhere, etc. Those formats seem to support some kind of checking and error correction, and that's the purpose of the qemu-img check option. The wiki for qcow2 shows consistency features, for example
What is qemu-img consistency check?
1,591,870,767,000
I started noticing some time ago in Xfce4 that when I sent some file to the Trash, tumbler (the Xfce4 thumbnailer) would cause very high I/O load for quite some time. Upon investigating the issue, I found that it was scanning the ~/.thumbnails directory, which was very large in size. So I decided to write a cron script that will periodically clean the ~/.thumbnails directory, but there is a certain directory of large video files that tumbler takes a bit of time, and sometimes even fails, to create thumbnails for. The idea is removing all thumbnails, except the ones for these videos. But in order to keep these thumbnails, I have to find what their names are. The problem is that thumbnails are stored named with a md5sum of the URI, plus the PNG extension. Upon looking at the tumbler source, I found the name for the thumbnail is generated in the following line: md5_hash = g_compute_checksum_for_string (G_CHECKSUM_MD5, uri, -1); The documentation for g-compute-checksum-for-string says: g_compute_checksum_for_string(GChecksumType checksum_type, const gchar *str, gssize length); checksum_type: a GChecksumType str: the string to compute the checksum of length: the length of the string, or -1 if the string is null-terminated. To put it short, the thumbnail for a file named /home/teresaejunior/File 01.png will be stored in the .thumbnails/ directory as a8502be3273541e618b840204479a7f9.png According to the ThumbnailerSpec, URI is file://filename. I did some research on the "null character", and thought \0 would do the trick. In order to achieve the result a8502be3273541e618b840204479a7f9, I believed the following should work: printf "file:///home/teresaejunior/File 01.png\0" | md5sum but that returns f507285c45d293fa66bc0b813d17b6e6 instead. Can someone give me some advice? I believe my printf line is flawed. What is my command doing different from g_compute_checksum_for_string?
The NUL character is not included when the MD5 is calculated. Rather, it's the space character that's causing your problem. The filename is URL-encoded: $ printf '%s' 'file:///home/teresaejunior/File%2001.png' | md5sum a8502be3273541e618b840204479a7f9 - Here's one way to do the conversion with Perl: $ perl -MURI::file -MDigest::MD5=md5_hex \ -e 'printf "%s.png\n", md5_hex(URI::file->new(shift))' \ '/home/teresaejunior/File 01.png' a8502be3273541e618b840204479a7f9.png
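For paths whose only special character is a space, the encoding step can even be done in plain shell; note that this naive sed handles spaces only, which is why the general Perl variant above is preferable (path and expected value are the ones from the question):

```shell
path='/home/teresaejunior/File 01.png'
# Percent-encode spaces only -- sufficient for this path, not in general
uri="file://$(printf '%s' "$path" | sed 's/ /%20/g')"
printf '%s.png\n' "$(printf '%s' "$uri" | md5sum | cut -d' ' -f1)"
# → a8502be3273541e618b840204479a7f9.png
```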
How to compute a thumbnail filename from the shell?
1,591,870,767,000
md5suming a file is fine, but I cannot seem to invoke md5sum to calculate the hash based only on the file's contents. (For those wondering, I'd prefer that multiple wgets of the same file return the same md5 hash. md5summing the file keeps reporting different hashes because the timestamp on the file is different.) julian@julian-computer:/tmp$ md5sum < `cat index.html` bash: `cat index.html`: ambiguous redirect
Md5sum and other hashing functions have nothing to do with the timestamp of the file whose checksum is being calculated: unless you have some weird version installed, md5sum processes only the contents of the file, so plain md5sum index.html already does what you want (md5sum < index.html also works, and suppresses the filename in the output). The "ambiguous redirect" error comes from the backticks: `cat index.html` expands to the file's contents, and the shell then tries to use those words as the redirection target.
Running md5sum on a file's contents
1,591,870,767,000
I have problem on Unix, have to create a lot of passwords in smd5 but don't know how to generate smd5 passwords using some unix tool or perl in a form: {smd5}DoZgZ.OE$vSg4ZH7Bpy0BCdXNzBj001 I have never had any problems to generate anything on linux in md5/sha256/sha512 with tools like e.g. openssl I have tried with Perl using something like that: use Digest::MD5; use MIME::Base64; $ctx = Digest::MD5->new; $ctx->add('vwkfA17aF`'); $salt = 'DoZgZ.OE'; $ctx->add($salt); $hashedPasswd = '{smd5}' . encode_base64($ctx->digest . $salt ,''); print $hashedPasswd . "\n"; but unfortunatelly output is quite different : {smd5}5zJphaZULO3gnT1pwT1YHERvWmdaLk9F I do not see salt there like here {smd5}DoZgZ.OE$vSg4ZH7Bpy0BCdXNzBj001 and string is longer
AIX's {smd5} format is a non-standard one. It's a minor variant of the *BSD/Linux/Solaris “MD5” (which is the one generated by openssl passwd -1). I wasn't able to find much information about it. There is code contributed to John the Ripper (present in the 1.8.0 jumbo version) that calculates it, in the file aix_smd5_fmt_plug.c. From reading the code, it seems that the difference is that the BSD MD5 variant effectively prepends the string $1$ to the salt in one place, whereas the AIX variant doesn't. It shouldn't be very hard to patch OpenSSL to support this variant if you know C. You can change the algorithm used by AIX by editing /etc/security/pwdalg.cfg and adding the line lpa_options = std_hash=true in the smd5: stanza. The normal way to do that is with the chsec command: chsec -f /etc/security/pwdalg.cfg -s md5 -a std_hash=true As far as I understand, this invalidates passwords recorded with the non-standard algorithm. Note that LDAP salted MD5 is not a good password hash, because it isn't slow. How to securely hash passwords? explains what that means. The BSD and AIX “MD5” algorithms are slow (not ideally slow, but way better than non-slow algorithms).
smd5 - generate by unix tool or perl
1,591,870,767,000
I want to replace all files in a target path with the same name as original.file AND the same hash as orignal.file with new.file. What's the command to do this? Say I have updated the contents of a file, and now I want all other copies of that file in a certain path to be updated as well. In most cases the following code would work: find /target_path/ -iname "original.file" -exec cp new.file '{}' However if original.file is readme.txt for example, many unrelated files would be overwritten.
Since this requires a test to see whether the checksums match before deciding to run the cp, you will have to run a subshell as the -exec argument to find. This should do the job: find /target_path/ -iname "original.file" -exec bash -c '[ "$(md5sum < original.file)" = "$(md5sum < "$1")" ] && cp new.file "$1"' _ {} \; Hashing via standard input matters here: md5sum file prints the (differing) path after the digest, so those strings would never compare equal; reading from stdin makes both sides report just the digest and -. Passing the found path as $1 instead of substituting {} into the quoted script also keeps odd file names from being interpreted by the inner shell.
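A self-contained dry run of the idea with invented paths and contents; hashing via standard input keeps md5sum's reported file name out of the comparison, so only the digests matter:

```shell
mkdir -p target/sub
printf 'old\n'   > original.file            # reference copy
printf 'old\n'   > target/sub/original.file # same name, same hash -> replaced
printf 'other\n' > target/original.file     # same name, different hash -> kept
printf 'new\n'   > new.file

find target -iname original.file -exec bash -c \
    '[ "$(md5sum < original.file)" = "$(md5sum < "$1")" ] && cp new.file "$1"' _ {} \;

cat target/sub/original.file   # → new
cat target/original.file       # → other
```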
Replace all files with identical hash
1,591,870,767,000
The command ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub prints the 128-bit fingerprint of the RSA key. What is the command to get the 160-bit fingerprint of a RSA key?
The key fingerprint is a hash of the key material. In a public key file, the key material is the second whitespace-separated field on the line, encoded in base64. The display format for the fingerprint depends on the hash that's being used. The 128-bit fingerprint uses MD5 and is displayed in hexadecimal. For example, the following commands display the same fingerprint, with different punctuation and surrounding material: ssh-keygen -f /etc/ssh/ssh_host_rsa_key.pub -l -E md5 </etc/ssh/ssh_host_rsa_key.pub awk '{print $2}' | base64 -d | md5sum The SHA256 fingerprint (256 bits) is displayed in Base64. Again, here are two commands to display the fingerprint. ssh-keygen -f /etc/ssh/ssh_host_rsa_key.pub -l -E sha256 </etc/ssh/ssh_host_rsa_key.pub awk '{print $2}' | base64 -d | openssl dgst -sha256 -binary | base64 If you need a 160-bit fingerprint, it's using SHA-1, which was never commonly supported (I think SHA-1 wasn't introduced as an alternative to MD5 until a time when SHA-1 itself was deprecated). Current versions of OpenSSH don't support it, but you can use either of the alternative methods above with sha1 instead of md5 or sha256, depending on whether you need the hex or base64 format.
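With invented key material standing in for a real public key, the 160-bit pipeline looks like this (both the hex and base64 presentations of the same SHA-1 digest):

```shell
# Fake key material -- a real .pub line has the same three-field layout
printf 'ssh-rsa %s user@example\n' "$(printf 'not-a-real-key' | base64)" > demo.pub
# Hex form of the 160-bit fingerprint
awk '{print $2}' demo.pub | base64 -d | sha1sum | cut -d' ' -f1
# Base64 form of the same digest
awk '{print $2}' demo.pub | base64 -d | openssl dgst -sha1 -binary | base64
```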
160-bit fingerprint of RSA key
1,591,870,767,000
I create a .tgz/tarball with $(npm pack). I then run: sha1sum oresoftware-npp-0.0.1001.tgz and I get: 77c58da68593dcdcd14bb16a37f5f63ef42bab63 oresoftware-npp-0.0.1001.tgz I want to compare that shasum against another tarball on a remote server. I can query for a shasum for a tarball on the NPM registry, with: npm view @oresoftware/npp@latest dist.shasum which yields: 3c2e7328110ba57e530c9938708b35bde941c419 this shasum is different than the other one above, but that's expected, since I changed the contents of the .tgz tarball file. my question is 3 fold: When I generated a sha1sum of the .tgz file resulting from npm pack, is that the right way to do it? To generate the sha1sum after the tar file is created? I assume that the the sha1sum would be identical if the tarballs had identical contents? Would they differ if the files were created/modified at different times even if they have otherwise the same contents? Is there a better way to check if two tarballs have the same contents? That's all I am trying to do.
The checksums available from the NPM registry provide two features: they allow you to verify that your download hasn’t been corrupted, and if you can verify the checksums out of band, that the downloaded files haven’t been altered. Unless NPM archives are built reproducibly, the checksums don’t allow you to verify that an archive you build yourself using npm pack contains what it’s supposed to. The issue with tarballs is that they contain metadata: the ownership, permissions and timestamps of the stored files, as stored by tar, and on top of that, compression metadata. If values for all of these are pre-agreed, they can be specified to override the values obtained from the file system, but that requires pre-agreement. To compare the contents of two arbitrary tarballs, the only reliable way is to extract their contents and compare that.
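A small demonstration of why archive hashes are unreliable for content comparison: changing only a file's timestamp changes the tar stream, while the extracted trees still compare equal (GNU tar and GNU touch assumed):

```shell
mkdir -p src
printf 'hello\n' > src/f
tar -cf one.tar src
touch -m -d '2000-01-01 00:00:00' src/f   # content unchanged, mtime changed
tar -cf two.tar src

# The archives hash differently...
sha1sum one.tar two.tar

# ...but extracting and diffing shows identical contents
mkdir -p x1 x2
tar -xf one.tar -C x1
tar -xf two.tar -C x2
diff -r x1 x2 && echo "contents identical"
```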
Compare tarballs with sha1sum
1,591,870,767,000
I'm trying to reverse engineer an IP camera firmware and have found the complete ROM OS, but I would like to find out the system password, so I have looked at /etc/passwd. The file is not there; it is instead in /etc/default/passwd, and here is its content: # cat passwd admin:hgZXuon0A2DxN:0:0:Administrator:/etc/config:/bin/sh viewer::1:1:Viewer:/:/dev/null So now I am searching for the shadow file, but is there such a file in the complete ROM? So I'm a bit confused here: what is the encryption type used on this system? Btw, I want to learn how to do it, not just look up a password table (btw, it would work on the web UI but not on telnet), and every tutorial seems to use this type of hash: root:$6$jcs.3tzd$aIZHimcDCgr6rhXaaHKYtogVYgrTak8I/EwpUSKrf8cbSczJ3E7TBqqPJN2Xb.8UgKbKyuaqb78bJ8lTWVEP7/:0:0:root:/root:/bin/bash Not the one I have
In that form (that is before /etc/shadow and without any $...$ prefix) it is probably (3)DES based hashing, see https://en.wikipedia.org/wiki/Crypt_%28C%29#Traditional_DES-based_scheme and the table above that paragraph: The original password encryption scheme was found to be too fast and thus subject to brute force enumeration of the most likely passwords.[10] In Seventh Edition Unix,[12] the scheme was changed to a modified form of the DES algorithm If you use this tool https://github.com/psypanda/hashID it says about your value: Analyzing 'hgZXuon0A2DxN' [+] DES(Unix) [+] Traditional DES [+] DEScrypt A brute-forcing tool like hashcat should be able to find the original password based on that. It also tells you for your specific hash that the hash value is wrong (for this reason: https://hashcat.net/forum/thread-3809.html), in which case, if this is really a hash, it is probably instead hgZXuon0A2DxM. Note an interesting "feature" of this kind of password storage (if it is truly ancient DES-based Unix storage): only the first 8 bytes (hence characters, because then UTF-8 was unheard of) are taken into account, so that limits the space of possible values.
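For reference, a traditional DES crypt string like the one above is 13 characters from the alphabet [./0-9A-Za-z]: the first two are the salt, the remaining eleven encode the hash. A quick sketch of splitting it:

```shell
# Split a traditional DES crypt value into its salt and hash parts.
h='hgZXuon0A2DxN'
salt=$(printf '%s' "$h" | cut -c1-2)    # first two characters: the salt
digest=$(printf '%s' "$h" | cut -c3-)   # remaining eleven: the DES-based hash
echo "length=${#h} salt=$salt digest=$digest"
```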
Reverse engineering IP camera firmware to find admin password
1,591,870,767,000
I went to the MX Linux website and their "Direct Download" linked me to their Sourceforge downloads page. I selected the first option - MX-19.2_September_x64.iso then checked the download using this utility. MX website lists this - Checksums and signatures of Final ISOs MX-19.2_386.iso md5sum: 6f5b12f9147bf457286e27196c501390 sha256: 187781c59394d086f347b00afe2f75e38e18a1044624998939c8403b40d4975e signature MX-19.2_x64.iso md5sum: a8f62099a9567e146108c51457183ad3 sha256: 7cf6d7dafe8200e7553f3548121eac077e87f891b5cdb939c0b677b9d7720e4c signature MX-19.2_ahs_x64.iso md5sum: 01435f705690c1bddfe3abf0921d0168 sha256: 20611c53c0015b1f2fc6eee4b6ef43a6738eeada50c66f7e66a5c88dd68fe763 signature MX-19.2_KDE_x64.iso md5sum: 065b1b9e1b798e776553778cebf48161 sha256: 0464b9ea35a3254eacc0a62eb64e36ba9a85591767cc6fd858f21a9617aedc66 signature The SHA256 (and MD5) I got after running the utility doesn't match the string given on MX's website. Am I testing it wrong or is the ISO not genuine?
You are doing it right! Here is what I got for MX-19.2_September_x64.iso from the "Snapshots" directory: $ md5sum MX-19.2_September_x64.iso e1c424823243b3b5371953b22bc2307c MX-19.2_September_x64.iso $ sha256sum MX-19.2_September_x64.iso 340ed960ac91e6c52f845278a4f2b751af7ce458f8ba126a3076564b5f8b1cef MX-19.2_September_x64.iso But the checksums on the website are not for MX-19.2_September_x64.iso, which is a monthly snapshot of the MX Linux. The checksums on the website are for MX-19.2_x64.iso from the "Final" directory: $ md5sum MX-19.2_x64.iso a8f62099a9567e146108c51457183ad3 MX-19.2_x64.iso $ sha256sum MX-19.2_x64.iso 7cf6d7dafe8200e7553f3548121eac077e87f891b5cdb939c0b677b9d7720e4c MX-19.2_x64.iso
MX Linux ISO checksum mismatch
1,591,870,767,000
I need to write websocket server on GAWK. And server must handle user's Sec-WebSocket-Key by following algorithm (from RFC): Concat key with 258EAFA5-E914-47DA-95CA-C5AB0DC85B11 Take sha-1 in binary form (should have 20 symbols) Take base64 from that binary form I'm trying to use openssl but the resulting code is incorrect: $ openssl sha1 -binary <<< 'a0+ZvgYqsMFHRerif0go8g==258EAFA5-E914-47DA-95CA-C5AB0DC85B11' | openssl base64 mYryklMdRrrLwrMQDeKEzOVMMWk= Meanwhile, the following PHP code generates valid output: $key = "a0+ZvgYqsMFHRerif0go8g=="; $hash = $key.'258EAFA5-E914-47DA-95CA-C5AB0DC85B11'; $hash = sha1($hash,true); $hash = base64_encode($hash); echo($hash); tswGtNOxrRhDmC04XQaDigMeaJA= What am I doing wrong?
The here string construct <<<… feeds the given string plus a newline as input to the command. So openssl sha1 -binary <<< 'a0+ZvgYqsMFHRerif0go8g==258EAFA5-E914-47DA-95CA-C5AB0DC85B11' is equivalent to printf '%s\n' 'a0+ZvgYqsMFHRerif0go8g==258EAFA5-E914-47DA-95CA-C5AB0DC85B11' | openssl sha1 -binary But your specification says “Concat key with 258EAFA5-E914-47DA-95CA-C5AB0DC85B11”, not “Concat[enate the] key with 258EAFA5-E914-47DA-95CA-C5AB0DC85B11 and a newline”. So you need printf '%s' 'a0+ZvgYqsMFHRerif0go8g==258EAFA5-E914-47DA-95CA-C5AB0DC85B11' | openssl sha1 -binary and this does produce tswGtNOxrRhDmC04XQaDigMeaJA=. echo -n … is equivalent to printf %s … except that there are shells that don't support echo -n or that expand backslash escapes in echo …, whereas printf %s … has a standard behavior (always print the argument literally).
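Putting it together, a minimal sketch of the full Sec-WebSocket-Accept computation from RFC 6455, using the key from the question:

```shell
key='a0+ZvgYqsMFHRerif0go8g=='
guid='258EAFA5-E914-47DA-95CA-C5AB0DC85B11'
# No trailing newline: printf '%s' hands openssl exactly key+GUID
printf '%s' "${key}${guid}" | openssl sha1 -binary | openssl base64
```

This prints tswGtNOxrRhDmC04XQaDigMeaJA=, matching the PHP output.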
OpenSSL generates invalid websocket security code
1,591,870,767,000
I'm trying to install docker with apt from the https://download.docker.com/linux/debian/ repo. Unfortunately installation always fails with a hash mismatch warning. I have already tried everything suggested in this post https://askubuntu.com/questions/41605/trouble-downloading-packages-list-due-to-a-hash-sum-mismatch-error without any success. The thing is when I compare the hash of the following error Fetched 21.4 MB in 17s (1,290 kB/s) E: Failed to fetch https://download.docker.com/linux/debian/dists/buster/pool/stable/amd64/containerd.io_1.2.13-2_amd64.deb Hash Sum mismatch Hashes of expected file: - SHA512:e0432c524abf9d915d42eab87c0a6cf4bd589cf2f250652253f98099c7961196f59ea0eb3f5683b05eafd969254e614739dc5681da0573b09a2eab64ab4efcfd - SHA256:71209f4a958d94639cba81ba3469d0aa9eff3da484106580959f5cf1fd116666 - SHA1:08fd3a4a4e82a1c0452c6bbd5803b19315c7e968 [weak] - MD5Sum:2ed3788e04a8a8787ea83b8b3a00152f [weak] - Filesize:21404482 [weak] Hashes of received file: - SHA512:24c80b4371056e0b7c34a7e9abde3ee62ecfb8ee5dbbb8db4a48104b16574749588ca00f71bc4d7c4fe148a9f706e86a9a2b4ac1f5f7955bfb316950f093de49 - SHA256:81753f427efcc308215d8a604e020743ab86ff2c45d67d86f35291d14550e203 - SHA1:316d0bbe37b28c9da64bfa263e2b31a3bbe7199e [weak] - MD5Sum:8dacbbff65f077d567ab1bff2cc2ad4b [weak] - Filesize:21404482 [weak] Last modification reported: Fri, 15 May 2020 03:23:56 +0000 E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? with the output of md5sum containerd.io_1.2.13-2_amd64.deb (file downloaded manually from https://download.docker.com/linux/debian/dists/buster/pool/stable/amd64/containerd.io_1.2.13-2_amd64.deb) I get the md5 hash 2ed3788e04a8a8787ea83b8b3a00152f containerd.io_1.2.13-2_amd64.deb which is exactly what apt claims is the expected value. So my only guess is that apt is either downloading something wrong or hashes the wrong way. This sound quite strange to me and I could not find any help searching for this. 
Here is my sources.list $ cat /etc/apt/sources.list # # deb cdrom:[Official Debian GNU/Linux Live 10.4.0 xfce 2020-05-09T10:59]/ buster main # deb cdrom:[Official Debian GNU/Linux Live 10.4.0 xfce 2020-05-09T10:59]/ buster main deb http://ftp.tu-clausthal.de/debian/ buster main deb-src http://ftp.tu-clausthal.de/debian/ buster main deb http://security.debian.org/debian-security buster/updates main deb-src http://security.debian.org/debian-security buster/updates main # buster-updates, previously known as 'volatile' deb http://ftp.tu-clausthal.de/debian/ buster-updates main deb-src http://ftp.tu-clausthal.de/debian/ buster-updates main # This system was installed using small removable media # (e.g. netinst, live or single CD). The matching "deb cdrom" # entries were disabled at the end of the installation process. # For information about how to configure apt package sources, # see the sources.list(5) manual. deb [arch=amd64] https://download.docker.com/linux/debian buster stable Debian is running in a virtual machine. Host system is Windows 10. Thanks for any advice!
Thanks @muru for referring to this post; it was indeed a problem with Windows' Hyper-V and VirtualBox. Disabling the Hyper-V acceleration solved the problem.
VirtualBox: apt download / hashing problem (APT Hash Sum Mismatch)
1,591,870,767,000
I have the following XML piece: <value id="1" creatorId="0" creationTime="1639487132" expirationTime="1639573532">+380554446363</value> <value id="1" creatorId="0" creationTime="1639487132" expirationTime="1639573532">+380554446364</value> <value id="1" creatorId="0" creationTime="1639487132" expirationTime="1639573532">+380554446365</value> I am trying to replace the <value> tag contents with their SHA-1 hashes using the following command: cat test.xml | sed "s/>[+]\([0-9][0-9]*\)<\/value>/>+$(echo \\1 | sha1sum | cut -f1 -d' ')<\/value>/g" It fails by replacing all found cases with the same incorrect value. Expected: <value id="1" creatorId="0" creationTime="1639487132" expirationTime="1639573532">34df370575e3528b31daef8633cb539119a3b028</value> <value id="1" creatorId="0" creationTime="1639487132" expirationTime="1639573532">d93767c769fd51bcf9eb25f95932559b24bae812</value> <value id="1" creatorId="0" creationTime="1639487132" expirationTime="1639573532">20338c1f048bed553b6cce76eaf1d388ba7686f5</value> Got: <value id="1" creatorId="0" creationTime="1639487132" expirationTime="1639573532">+cbcac786fef5abeb39fe473ab6abe554978a8156</value> <value id="1" creatorId="0" creationTime="1639487132" expirationTime="1639573532">+cbcac786fef5abeb39fe473ab6abe554978a8156</value> <value id="1" creatorId="0" creationTime="1639487132" expirationTime="1639573532">+cbcac786fef5abeb39fe473ab6abe554978a8156</value> What could I be doing wrong? TIA.
The sha1sum is evaluating the SHA-1 of the constant string "\\1" instead of the first SED regex match: $ echo \\1 | sha1sum cbcac786fef5abeb39fe473ab6abe554978a8156 - The shell performs all the various expansions (e.g. command substitutions) before executing the command (in this case, sed). Thus, shell expands cat test.xml | sed "s/>[+]\([0-9][0-9]*\)<\/value>/>+$(echo \\1 | sha1sum | cut -f1 -d' ')<\/value>/g" to cat test.xml | sed "s/>[+]\([0-9][0-9]*\)<\/value>/>+cbcac786fef5abeb39fe473ab6abe554978a8156<\/value>/g" It then runs two processes, one running cat test.xml and another running sed "s/>[+]\([0-9][0-9]*\)<\/value>/>+cbcac786fef5abeb39fe473ab6abe554978a8156<\/value>/g" with the STDOUT of the first process piped to the STDIN of the second process. In order for what you're trying to do to work, sed would have to be able to execute other executables from within sed. I don't believe sed can do that, so you'll have to do it some other way. You can do it using sed, e.g. this is one way for a in `cat test.xml | sed -E 's,^.*>(\+[0-9]+)<\/value>$,\1,'`; do echo "$a" | sha1sum | cut -f1 -d' '; done >2nd cat test.xml | sed -E 's,>\+[0-9]+</value>$,>,' >1st paste -d '' 1st 2nd | sed -E 's,$,</value>,' You've also not included the "+" in the brackets enclosing the first sed match string, from the SHA1 sums you've provided as the expected results, I believe you want the "+" included, so I've corrected that as well.
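Another sketch that avoids the intermediate files: let the shell loop over the lines and call sha1sum once per number (keeping the leading "+" in the hashed string, as above). A sample line is written to a temporary file so the snippet is self-contained; the closing tag is assumed to always be </value>.

```shell
xml=$(mktemp)
printf '%s\n' '<value id="1">+380554446363</value>' > "$xml"
result=$(
    while IFS= read -r line; do
        num=${line#*>}                 # drop everything up to the first '>'
        num=${num%%<*}                 # drop the closing tag: '+380554446363'
        hash=$(printf '%s' "$num" | sha1sum | cut -d' ' -f1)
        printf '%s\n' "${line%%>*}>$hash</value>"
    done < "$xml"
)
echo "$result"
```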
Problems with replacing XML tag contents using sed
1,591,870,767,000
I have a command that I run in a folder that outputs MD5 hashes and filenames on the terminal: ls |sort -nr | xargs md5sum I need this output in a text file that I can download and compare to another folder on another customer's machine. How can I modify the command such that its output is stored in a file in say /tmp? I'm using Redhat 5.
It's a bad idea to parse the output of ls. The primary job of ls is to list the attributes of files (size, date, etc.). The shell itself is perfectly capable of listing the contents of a directory, with wildcards. It's quite simple to run md5sum on all the files in the current directory and put the output in a file: redirect its output to the desired output file. md5sum * >/tmp/md5sums.txt If you want the output to be sorted by file name, pipe the output of md5sum into sort. md5sum * | sort -k 2 >/tmp/md5sums.txt Note that numeric sorting (-n) will only give useful results if the file names are purely numeric. If all you need is for the output to be deterministic, how you sort doesn't matter.
How do I redirect command output to a file?
1,591,870,767,000
Input text: test Online MD5 hashsum generator: 098f6bcd4621d373cade4e832627b4f6 echo "test" | md5sum: d8e8fca2dc0f896fd7cb4cb0031ba249 The same also happens with sha512sum and sha1sum. Why do Linux and an online generator generate different hashes?
One of these is the hash of "test" and one of them is the hash of "test\n". $ printf 'test' | md5sum 098f6bcd4621d373cade4e832627b4f6 - $ printf 'test\n' | md5sum d8e8fca2dc0f896fd7cb4cb0031ba249 - echo outputs a newline character after its arguments.
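With the trailing newline suppressed, the command line matches the online generator's value for "test":

```shell
echo -n 'test' | md5sum      # echo -n suppresses the trailing newline
printf '%s' 'test' | md5sum  # portable equivalent
```

Both print 098f6bcd4621d373cade4e832627b4f6.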
Command line generates different hashsum than online hash generator… [duplicate]
1,591,870,767,000
Is there a way I could get the md5 hash of a file on a remote server? I'm looking for a command like md5 hlin117@server:/path/to/file.txt
AFAIK there is no remote md5. The closest you can get is to execute the command on the remote server: ssh hlin117@server md5sum /path/to/file.txt Obviously, md5sum must be installed on the remote server. Alternatively, get the file and do it locally: scp hlin117@server:/path/to/file.txt . md5sum file.txt rm file.txt Or, as @Cyrus pointed out: ssh hlin117@server cat /path/to/file.txt | md5sum
BASH: Getting the md5 hash of file on remote server
1,591,870,767,000
I am given a file containing the md5 values for files within the same folder. The information is in the file md5checksums.txt in the following format: b0da7ead9d82a3494d7e0a7099871ef4 ./GCF_000959505.1_ASM95950v1_assembly_report.txt 7ff32cbb16daf46c87b3546ad576ff66 ./GCF_000959505.1_ASM95950v1_assembly_stats.txt 034081da3aa0708f06c2ec1129e4aca9 ./GCF_000959505.1_ASM95950v1_cds_from_genomic.fna.gz I want to do md5 checks for all the files. I got this command: awk '{system("md5 "$2)}' md5checksums.txt But this just prints the md5 values: MD5 (./GCF_000959505.1_ASM95950v1_assembly_report.txt) = b0da7ead9d82a3494d7e0a7099871ef4 MD5 (./GCF_000959505.1_ASM95950v1_assembly_stats.txt) = 7ff32cbb16daf46c87b3546ad576ff66 MD5 (./GCF_000959505.1_ASM95950v1_cds_from_genomic.fna.gz) = 3a30966523a36368ab432f666001f80a I would like to compare the calculated md5 against the first column of md5checksums.txt. I thought I could do something like an awk inside an awk, but I can't get it to work: awk '{system("md5 "$2" | awk HERE EVALUATE RESULTS AND CHECK IF EQUAL TO $1")}' md5checksums.txt
I'm a bit confused as to why you involve awk in this. To verify the MD5 checksums in a file produced by GNU md5sum, you do md5sum -c file.txt Or, on an OpenBSD or NetBSD system whose md5 utility supports -c filename (not FreeBSD or macOS): md5 -c file.txt In your case, file.txt would be your md5checksums.txt file.
awk inside another awk's system
1,591,870,767,000
I want output like this: name size and hash: myfile.txt 222M 24f4ce42e0bc39ddf7b7e879a mynewfile.txt 353M a274613df45a94c3c67fe646 For name and size only I have ll -h | awk '{print $9,$10,$11,$12,$5}' But how can I get the hash for every file? I tried: ll -h | awk '{print $9,$10,$11,$12,$5}' | md5sum But that gives me only a single hash overall.
You should not parse ls; instead use this: for f in * .*; do [ -f "$f" ] && \ printf "%s %s %s\n" "$f" $(du -h -- "$f" | cut -f1) $(md5sum -- "$f" | cut -d' ' -f1) done The for loop runs through all files and directories in the current directory. [ -f "$f" ] checks if it's a regular file printf "%s %s %s\n" prints the arguments in the desired format. "$f" the first argument is the filename du -h -- "$f" | cut -f1 the second is the size (human readable), but not the filename, cut cuts all except the first field away md5sum -- "$f" | cut -d' ' -f1 third is the MD5 sum, but without the filename.
md5sum for every file (with ll)
1,652,101,680,000
I have a big log file containing lines like the example below: {"data_1":210,"target_number":1096748811,"extra_data":66} {"data_1":0,"target_number":7130881445,"extra_data":56} {"data_1":1712,"target_number":1098334917,"extra_data":48} {"data_1":0,"target_number":3062674667,"extra_data":54} {"data_1":53,"target_number":5110609228,"extra_data":246} I want to replace each target_number's value with its md5 value throughout the whole file. I am trying the jq command with basic syntax as below: jq '.target_number|= "md5(\(.))"' input2.log Expected output is: {"data_1":210,"target_number":620e25e6f054992308c564cb883e4940,"extra_data":66} Current output is: {"data_1":210,"target_number":md5(1096748811),"extra_data":66}
jq does not have a direct md5 calculation function like it has for base64. You need to use the shell's utilities for that. jq -c . input.log | while IFS= read -r obj; do md5sum=$( printf '%s' "$obj" | jq -j '.target_number' | md5sum | cut -d' ' -f1) jq -c --arg md5 "$md5sum" '.target_number = $md5' <<<"$obj" done > output.json Note that the generated hash from md5sum cannot be interpreted as a number in jq, but only as a string value. So it will be enclosed in quotes. Note that this approach is "expected" to be slower, because it involves invoking jq for each line of your input file separately and calculating the hash for the number.
How should I replace a value in JSON file with its md5 value using jq command?
1,652,101,680,000
I am trying to calculate a PBKDF2 hash, but am getting inconsistent results. Message: Hello Salt: 60C100D05C610E8B94A854DFC0789885 Iterations: 1 Key length: 16 Expected hash: 584519EF3E56714E301A4D85F972B6B4 nettle-pbkdf2 gives a951d3cd9014e0c0 527000727c1e928a https://asecuritysite.com/encryption/PBKDF2z and CryptoJS gives 584519EF3E56714E301A4D85F972B6B4 How can I use nettle-pbkdf2 or any other CLI program to generate the expected hash 584519EF3E56714E301A4D85F972B6B4? Reproduction steps below: nettle-pbkdf2 $ printf "Hello" | nettle-pbkdf2 --iterations=1 --length=16 --hex-salt 60C100D05C610E8B94A854DFC0789885 > a951d3cd9014e0c0 527000727c1e928a https://asecuritysite.com/encryption/PBKDF2z Message: Hello Salt: 60C100D05C610E8B94A854DFC0789885 Iterations: 1 Key length: 16 Hash: 584519EF3E56714E301A4D85F972B6B4 CryptoJS <script> /* CryptoJS v3.1.2 code.google.com/p/crypto-js (c) 2009-2013 by Jeff Mott. All rights reserved. code.google.com/p/crypto-js/wiki/License */ var CryptoJS=CryptoJS||function(g,j){var e={},d=e.lib={},m=function(){},n=d.Base={extend:function(a){m.prototype=this;var c=new m;a&&c.mixIn(a);c.hasOwnProperty("init")||(c.init=function(){c.$super.init.apply(this,arguments)});c.init.prototype=c;c.$super=this;return c},create:function(){var a=this.extend();a.init.apply(a,arguments);return a},init:function(){},mixIn:function(a){for(var c in a)a.hasOwnProperty(c)&&(this[c]=a[c]);a.hasOwnProperty("toString")&&(this.toString=a.toString)},clone:function(){return this.init.prototype.extend(this)}}, q=d.WordArray=n.extend({init:function(a,c){a=this.words=a||[];this.sigBytes=c!=j?c:4*a.length},toString:function(a){return(a||l).stringify(this)},concat:function(a){var c=this.words,p=a.words,f=this.sigBytes;a=a.sigBytes;this.clamp();if(f%4)for(var b=0;b<a;b++)c[f+b>>>2]|=(p[b>>>2]>>>24-8*(b%4)&255)<<24-8*((f+b)%4);else if(65535<p.length)for(b=0;b<a;b+=4)c[f+b>>>2]=p[b>>>2];else c.push.apply(c,p);this.sigBytes+=a;return this},clamp:function(){var 
a=this.words,c=this.sigBytes;a[c>>>2]&=4294967295<< 32-8*(c%4);a.length=g.ceil(c/4)},clone:function(){var a=n.clone.call(this);a.words=this.words.slice(0);return a},random:function(a){for(var c=[],b=0;b<a;b+=4)c.push(4294967296*g.random()|0);return new q.init(c,a)}}),b=e.enc={},l=b.Hex={stringify:function(a){var c=a.words;a=a.sigBytes;for(var b=[],f=0;f<a;f++){var d=c[f>>>2]>>>24-8*(f%4)&255;b.push((d>>>4).toString(16));b.push((d&15).toString(16))}return b.join("")},parse:function(a){for(var c=a.length,b=[],f=0;f<c;f+=2)b[f>>>3]|=parseInt(a.substr(f, 2),16)<<24-4*(f%8);return new q.init(b,c/2)}},k=b.Latin1={stringify:function(a){var c=a.words;a=a.sigBytes;for(var b=[],f=0;f<a;f++)b.push(String.fromCharCode(c[f>>>2]>>>24-8*(f%4)&255));return b.join("")},parse:function(a){for(var c=a.length,b=[],f=0;f<c;f++)b[f>>>2]|=(a.charCodeAt(f)&255)<<24-8*(f%4);return new q.init(b,c)}},h=b.Utf8={stringify:function(a){try{return decodeURIComponent(escape(k.stringify(a)))}catch(b){throw Error("Malformed UTF-8 data");}},parse:function(a){return k.parse(unescape(encodeURIComponent(a)))}}, u=d.BufferedBlockAlgorithm=n.extend({reset:function(){this._data=new q.init;this._nDataBytes=0},_append:function(a){"string"==typeof a&&(a=h.parse(a));this._data.concat(a);this._nDataBytes+=a.sigBytes},_process:function(a){var b=this._data,d=b.words,f=b.sigBytes,l=this.blockSize,e=f/(4*l),e=a?g.ceil(e):g.max((e|0)-this._minBufferSize,0);a=e*l;f=g.min(4*a,f);if(a){for(var h=0;h<a;h+=l)this._doProcessBlock(d,h);h=d.splice(0,a);b.sigBytes-=f}return new q.init(h,f)},clone:function(){var a=n.clone.call(this); a._data=this._data.clone();return a},_minBufferSize:0});d.Hasher=u.extend({cfg:n.extend(),init:function(a){this.cfg=this.cfg.extend(a);this.reset()},reset:function(){u.reset.call(this);this._doReset()},update:function(a){this._append(a);this._process();return this},finalize:function(a){a&&this._append(a);return this._doFinalize()},blockSize:16,_createHelper:function(a){return 
function(b,d){return(new a.init(d)).finalize(b)}},_createHmacHelper:function(a){return function(b,d){return(new w.HMAC.init(a, d)).finalize(b)}}});var w=e.algo={};return e}(Math); (function(){var g=CryptoJS,j=g.lib,e=j.WordArray,d=j.Hasher,m=[],j=g.algo.SHA1=d.extend({_doReset:function(){this._hash=new e.init([1732584193,4023233417,2562383102,271733878,3285377520])},_doProcessBlock:function(d,e){for(var b=this._hash.words,l=b[0],k=b[1],h=b[2],g=b[3],j=b[4],a=0;80>a;a++){if(16>a)m[a]=d[e+a]|0;else{var c=m[a-3]^m[a-8]^m[a-14]^m[a-16];m[a]=c<<1|c>>>31}c=(l<<5|l>>>27)+j+m[a];c=20>a?c+((k&h|~k&g)+1518500249):40>a?c+((k^h^g)+1859775393):60>a?c+((k&h|k&g|h&g)-1894007588):c+((k^h^ g)-899497514);j=g;g=h;h=k<<30|k>>>2;k=l;l=c}b[0]=b[0]+l|0;b[1]=b[1]+k|0;b[2]=b[2]+h|0;b[3]=b[3]+g|0;b[4]=b[4]+j|0},_doFinalize:function(){var d=this._data,e=d.words,b=8*this._nDataBytes,l=8*d.sigBytes;e[l>>>5]|=128<<24-l%32;e[(l+64>>>9<<4)+14]=Math.floor(b/4294967296);e[(l+64>>>9<<4)+15]=b;d.sigBytes=4*e.length;this._process();return this._hash},clone:function(){var e=d.clone.call(this);e._hash=this._hash.clone();return e}});g.SHA1=d._createHelper(j);g.HmacSHA1=d._createHmacHelper(j)})(); (function(){var g=CryptoJS,j=g.enc.Utf8;g.algo.HMAC=g.lib.Base.extend({init:function(e,d){e=this._hasher=new e.init;"string"==typeof d&&(d=j.parse(d));var g=e.blockSize,n=4*g;d.sigBytes>n&&(d=e.finalize(d));d.clamp();for(var q=this._oKey=d.clone(),b=this._iKey=d.clone(),l=q.words,k=b.words,h=0;h<g;h++)l[h]^=1549556828,k[h]^=909522486;q.sigBytes=b.sigBytes=n;this.reset()},reset:function(){var e=this._hasher;e.reset();e.update(this._iKey)},update:function(e){this._hasher.update(e);return this},finalize:function(e){var d= this._hasher;e=d.finalize(e);d.reset();return d.finalize(this._oKey.clone().concat(e))}})})(); (function(){var 
g=CryptoJS,j=g.lib,e=j.Base,d=j.WordArray,j=g.algo,m=j.HMAC,n=j.PBKDF2=e.extend({cfg:e.extend({keySize:4,hasher:j.SHA1,iterations:1}),init:function(d){this.cfg=this.cfg.extend(d)},compute:function(e,b){for(var g=this.cfg,k=m.create(g.hasher,e),h=d.create(),j=d.create([1]),n=h.words,a=j.words,c=g.keySize,g=g.iterations;n.length<c;){var p=k.update(b).finalize(j);k.reset();for(var f=p.words,v=f.length,s=p,t=1;t<g;t++){s=k.finalize(s);k.reset();for(var x=s.words,r=0;r<v;r++)f[r]^=x[r]}h.concat(p); a[0]++}h.sigBytes=4*c;return h}});g.PBKDF2=function(d,b,e){return n.create(e).compute(d,b)}})(); </script> <script> var salt = CryptoJS.enc.Hex.parse("60c100d05c610e8b94a854dfc0789885"); var message = "Hello"; var key128Bits = CryptoJS.PBKDF2(message, salt, { keySize: 4, iterations: 1 }); // Logs "584519ef3e56714e301a4d85f972b6b4" console.log(key128Bits.toString()); </script>
nettle-pbkdf2 documents it uses HMAC-SHA256 as its pseudo-random function; the other two are using HMAC-SHA1. Nettle has a PBKDF2-HMAC-SHA1 implementation, but I'm not sure if you can easily get it from the command line. (HMAC-SHA256 is generally a better choice if you have the option; SHA1 should be avoided). (Of course, you also shouldn't be using 1 iteration. I presume that's just for testing.)
PBKDF2 not the same
1,652,101,680,000
I recently came across sha1sum -c . As the manpage states - -c, --check read SHA1 sums from the FILEs and check them Now I know how to generate and use sha1sum from an .iso . For instance, $ sha1sum grml64-full_2014.11.iso 120bfa48b096691797a73fa2f464c7c71fac1587 grml64-full_2014.11.iso But if I try :- $ sha1sum -c grml64-full_2014.11.iso sha1sum: grml64-full_2014.11.iso: no properly formatted SHA1 checksum lines found I even tried :- $ cat sha1sum-grml 120bfa48b096691797a73fa2f464c7c71fac1587 As can be seen it is a single file which has the sha1sum. If I try the following :- $ sha1sum -c grml64-full_2014.11.iso sha1sum-grml sha1sum: grml64-full_2014.11.iso: no properly formatted SHA1 checksum lines found sha1sum: sha1sum-grml: no properly formatted SHA1 checksum lines found What I tried here was for sha1sum to generate and check the sha1sum with the checksum I have put in a file and compare between the two checksums or something. Maybe I have mis-understood something, maybe each file in the .iso needs to have its own checksum or something like that ? I looked up at both the man and the info. and became none the wiser. Look forward to understanding and a solution.
Generate the sha1sum file, sha1sum myfile >sums Then check with this file, sha1sum -c sums
creating and using hashsum in .iso image
1,652,101,680,000
Windows EXE files, which are in the PE format, have a header that contains a checksum. Is it possible to verify it under Linux? Because I am looking for a Linux command, I hope you understand that this is a Linux and not a Windows question (please don't close it).
There are a number of tools to do this; one such is pefile, a Python library with a built-in PE checksum verification function: #!/usr/bin/python3 import pefile import sys pe = pefile.PE(sys.argv[1]) if pe.verify_checksum(): print("PE checksum verified") else: print("PE checksum invalid") (error-handling left as an exercise for the reader). Save this as verifype, run chmod 755 verifype, then run it as ./verifype /path/to/pe.exe to check pe.exe's checksum.
Is it possible to verify a Windows EXE (PE file format) checksum under Linux?
1,652,101,680,000
I want to make a shell script, that lets the user select a mounted device and calculate a checksum for the whole data on this device. I need the checksum to test if the device has been manipulated by somebody else. My approach to this was like the following: #!/bin/bash cd "${0%/*}" device=$(zenity --file-selection --directory \ --filename="/run/media/"${USER}"/" zenity --info \ --title "Info Message" \ --width 500 \ --height 150 \ --text "$(find "$device" -type f -exec md5sum {} \; | sort -k 2 | md5sum | cut -d ' ' -f 1)" My questions are: Is this the right approach? How can I add a progress bar between the selection of the device and the dialog box with the output of the calculation?
You can use pv which allows you to monitor the progress of data through a pipe. find "$device" -type f -exec md5sum {} \; | pv -ls $(find "$device" -type f |wc -l) | sort -k 2 | md5sum -s <size> provides the total size of the data. Since we want to show the progress according to the number of files, we need to know how many files are in the device, hence the $(find "$device" -type f |wc -l). -l - Instead of counting bytes, count lines (newline characters). The first find command will send the results of the md5sum to the pv command. pv will calculate the percentage done (the input lines divided by the total size provided by the -s flag), it will write the progress display to the standard error of the terminal, and send its input (the result of the first find command) to the next command after the pipe (in this case, sort). $ find "$device" -type f -exec md5sum {} \; | pv -ls $(find "$device" -type f |wc -l) | sort -k 2 | md5sum | cut -d ' ' -f 1 6.77k 0:00:10 [1.61k/s] [=========> ] 28% ETA 0:00:24 If there are too many files on the device, and counting the number of files might also take a long time, you can just use pv -l, in which case it will not show the completed percentage and the ETA, and the progress bar would move left and right only to indicate that data is moving. $ find "$device" -type f -exec md5sum {} \; | pv -l | sort -k 2 | md5sum | cut -d ' ' -f 1 5.64k 0:00:09 [1.72k/s] [ <=> ]
2>&1 will redirect the stderr of the pv command (the percent count) to the stdout - which is the stdin of the zenity command, so the latter can read the percent count and display the progress bar. 1>&3 redirects the old stdout (the output of the find ... md5sum command) to a new file descriptor 3, to separate it from the stdin of the zenity process. 3>&1 redirects file descriptor 3 of the pv process (containing the output of the find command) back to stdout, which is the stdin of the sort command to continue the analysis. And if you want to view the final md5sum output in another zenity window: zenity --info \ --title "Info Message" \ --width 500 \ --height 150 \ --text \ "$(find "$device" -type f -exec md5sum {} \; \ | ( ( pv -nls $(find "$device" -type f |wc -l) 2>&1 1>&3 ) \ | zenity --progress --auto-close --text="md5sum progress..." 2>/dev/null ) 3>&1 \ | sort -k 2 | md5sum | cut -d ' ' -f 1)"
Calculate checksum for whole content of a device and add a progress bar
1,652,101,680,000
I am getting the same output after checking the hashes of rmmod, modprobe, modinfo, lsmod, insmod and depmod:

root@user:/var/log/apt# md5sum /sbin/modprobe
150aa565f1e37e2fd200523b6b4fcedf  /sbin/modprobe
root@user:/var/log/apt# md5sum /sbin/modinfo
150aa565f1e37e2fd200523b6b4fcedf  /sbin/modinfo
root@user:/var/log/apt# md5sum /sbin/lsmod
150aa565f1e37e2fd200523b6b4fcedf  /sbin/lsmod
root@user:/var/log/apt# md5sum /sbin/insmod
150aa565f1e37e2fd200523b6b4fcedf  /sbin/insmod
root@user:/var/log/apt# md5sum /sbin/depmod
150aa565f1e37e2fd200523b6b4fcedf  /sbin/depmod

rkhunter log:

[22:41:02] Warning: The file properties have changed:
[22:41:02]          File: /bin/lsmod
[22:41:02]          Current hash: fcaa05d1888ba56f72194b80cab50de49b351354116adf1d2a578c6a3c626f44
[22:41:03]          Stored hash : 31e9e2579309d2c68a812d63710cb8257601970bb73344b5ff454d362bde1695
[22:41:03]          Current inode: 27304    Stored inode: 72
[22:41:03]          Current file modification time: 1583955426 (11-Mar-2020 20:37:06)
[22:41:03]          Stored file modification time : 1578801885 (12-Jan-2020 05:04:45)
[22:41:13]   /bin/kmod                                    [ Warning ]
[22:41:13] Warning: The file properties have changed:
[22:41:14]          File: /bin/kmod
[22:41:14]          Current hash: fcaa05d1888ba56f72194b80cab50de49b351354116adf1d2a578c6a3c626f44
[22:41:14]          Stored hash : 31e9e2579309d2c68a812d63710cb8257601970bb73344b5ff454d362bde1695
[22:41:14]          Current inode: 11350    Stored inode: 60
[22:41:14]          Current file modification time: 1583955426 (11-Mar-2020 20:37:06)
[22:41:14]          Stored file modification time : 1542059677 (12-Nov-2018 22:54:37)
[22:40:48] Warning: The file properties have changed:
[22:40:48]          File: /sbin/rmmod
[22:40:48]          Current hash: fcaa05d1888ba56f72194b80cab50de49b351354116adf1d2a578c6a3c626f44
[22:40:48]          Stored hash : 31e9e2579309d2c68a812d63710cb8257601970bb73344b5ff454d362bde1695
[22:40:48]          Current inode: 27594    Stored inode: 11327
[22:40:48]          Current file modification time: 1583955426 (11-Mar-2020 20:37:06)
[22:40:48]          Stored file modification time : 1578801890 (12-Jan-2020 05:04:50)
[22:40:46]   /sbin/modprobe                               [ Warning ]
[22:40:46] Warning: The file properties have changed:
[22:40:46]          File: /sbin/modprobe
[22:40:46]          Current hash: fcaa05d1888ba56f72194b80cab50de49b351354116adf1d2a578c6a3c626f44
[22:40:46]          Stored hash : 31e9e2579309d2c68a812d63710cb8257601970bb73344b5ff454d362bde1695
[22:40:46]          Current inode: 27591    Stored inode: 11330
[22:40:46]          Current file modification time: 1583955426 (11-Mar-2020 20:37:06)
[22:40:46]          Stored file modification time : 1578801890 (12-Jan-2020 05:04:50)
[22:40:45]   /sbin/modinfo                                [ Warning ]
[22:40:45] Warning: The file properties have changed:
[22:40:45]          File: /sbin/modinfo
[22:40:45]          Current hash: fcaa05d1888ba56f72194b80cab50de49b351354116adf1d2a578c6a3c626f44
[22:40:45]          Stored hash : 31e9e2579309d2c68a812d63710cb8257601970bb73344b5ff454d362bde1695
[22:40:45]          Current inode: 27589    Stored inode: 11331
[22:40:45]          Current file modification time: 1583955426 (11-Mar-2020 20:37:06)
[22:40:45]          Stored file modification time : 1578801890 (12-Jan-2020 05:04:50)
[22:40:42] Warning: The file properties have changed:
[22:40:42]          File: /sbin/insmod
[22:40:42]          Current hash: fcaa05d1888ba56f72194b80cab50de49b351354116adf1d2a578c6a3c626f44
[22:40:42]          Stored hash : 31e9e2579309d2c68a812d63710cb8257601970bb73344b5ff454d362bde1695
[22:40:42]          Current inode: 27585    Stored inode: 11334
[22:40:42]          Current file modification time: 1583955426 (11-Mar-2020 20:37:06)
[22:40:42]          Stored file modification time : 1578801890 (12-Jan-2020 05:04:50)

apt log:

root@user:/var/log/apt# cat /var/log/apt/history.log.1 | grep -n1 2020-03-11
21-
22:Start-Date: 2020-03-11 17:37:43
23-Commandline: apt upgrade -y
24-Upgrade: libsqlite3-0:amd64 (3.22.0-1ubuntu0.2, 3.22.0-1ubuntu0.3)
25:End-Date: 2020-03-11 17:37:43
26-

ls -l output:

root@user:~# ls -l /sbin/rmmod /sbin/modprobe /sbin/modinfo /sbin/modinfo /sbin/lsmod /sbin/insmod /sbin/depmod
lrwxrwxrwx 1 root root 9 Mar 11 20:37 /sbin/depmod -> /bin/kmod
lrwxrwxrwx 1 root root 9 Mar 11 20:37 /sbin/insmod -> /bin/kmod
lrwxrwxrwx 1 root root 9 Mar 11 20:37 /sbin/lsmod -> /bin/kmod
lrwxrwxrwx 1 root root 9 Mar 11 20:37 /sbin/modinfo -> /bin/kmod
lrwxrwxrwx 1 root root 9 Mar 11 20:37 /sbin/modinfo -> /bin/kmod
lrwxrwxrwx 1 root root 9 Mar 11 20:37 /sbin/modprobe -> /bin/kmod
lrwxrwxrwx 1 root root 9 Mar 11 20:37 /sbin/rmmod -> /bin/kmod

my operating system:

Distributor ID: Ubuntu
Description:    Ubuntu 18.04.4 LTS
Release:        18.04
Codename:       bionic

rkhunter log for kmod:

root@user:~# cat /var/log/rkhunter.log | grep -n10 kmod
419:[22:41:13]   /bin/kmod                                [ Warning ]
420-[22:41:13] Warning: The file properties have changed:
421:[22:41:14]          File: /bin/kmod
422-[22:41:14]          Current hash: fcaa05d1888ba56f72194b80cab50de49b351354116adf1d2a578c6a3c626f44
423-[22:41:14]          Stored hash : 31e9e2579309d2c68a812d63710cb8257601970bb73344b5ff454d362bde1695
424-[22:41:14]          Current inode: 11350    Stored inode: 60
425-[22:41:14]          Current file modification time: 1583955426 (11-Mar-2020 20:37:06)
426-[22:41:14]          Stored file modification time : 1542059677 (12-Nov-2018 22:54:37)

QUESTIONS

Why am I getting these results? Why are the hashes of these commands the same? I am asking because these commands give different outputs. Do these results mean I have really been hacked, or that a rootkit exists?
I see this on my Ubuntu system:

$ ls -l /sbin/modprobe /sbin/modinfo /sbin/lsmod /sbin/insmod /sbin/depmod
lrwxrwxrwx 1 root root 9 Mar 12 09:15 /sbin/depmod -> /bin/kmod
lrwxrwxrwx 1 root root 9 Mar 12 09:15 /sbin/insmod -> /bin/kmod
lrwxrwxrwx 1 root root 9 Mar 12 09:15 /sbin/lsmod -> /bin/kmod
lrwxrwxrwx 1 root root 9 Mar 12 09:15 /sbin/modinfo -> /bin/kmod
lrwxrwxrwx 1 root root 9 Mar 12 09:15 /sbin/modprobe -> /bin/kmod
$

The hashes are all the same because those are all symlinks to the same file. Nothing to be concerned about; that's normal for these programs. And you almost certainly don't have a rootkit.

As for why you don't see the update, it's because you don't understand how apt-get handles file modification times. Files installed by apt-get get their modification time from when the package was built, not from the time you installed it. If you check your log again, you'll almost certainly see an update to kmod; it will just be after the day you thought it would be on.
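The mechanism is easy to reproduce in a throwaway directory. The file below is a stand-in for /bin/kmod, not your real binary:

```shell
cd "$(mktemp -d)"                       # scratch directory
printf 'pretend this is kmod\n' > kmod  # stand-in for /bin/kmod
ln -s kmod lsmod                        # several names...
ln -s kmod modprobe
md5sum kmod lsmod modprobe              # ...one file, so identical hashes
readlink -f lsmod                       # shows what the symlink resolves to
```

md5sum follows symlinks when hashing, so every name that resolves to the same file necessarily produces the same digest.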
hashes are the same: rmmod, modprobe, modinfo, modinfo, lsmod, insmod, depmod
1,652,101,680,000
I have a very large file (200GB). Apparently it did not copy correctly when I transferred it over: the SHA-1 hashes of the source and the copy are different. Is there a way I can divide the file up into blocks (like 1MB or 64MB) and output a hash for each block? Then compare/fix? I might just write a quick app to do it.
That "quick app" already exists, and is relatively common: rsync. Of course, rsync will do a whole lot more than that, but what you want is fairly simple:

rsync -cvP --inplace user@source:src-path-to-file dest-path-to-file   # from the destination
rsync -cvP --inplace src-path-to-file user@dest:dest-path-to-file     # from the source

That will by default use ssh (or maybe rsh, on a really old version) to make the connection and transfer the data. Other methods are possible, too.

Options I passed are:

-c — skip based on checksums, not file size/mtime. By default rsync optimizes and skips transfers where the size & mtime match. -c forces it to compute the checksum (which is an expensive operation, in terms of I/O). Note this is a block-based checksum (unless you tell it to do whole files only), and it'll only transfer the corrupted blocks. The block size is automatically chosen, but can be overridden with -B (I doubt there is any reason to).

-v — verbose, will give some details (which file it's working on).

-P — turns on both partial files (so if it gets halfway through, it won't throw out the work) and a progress bar.

--inplace — update the existing file, not a temporary file (which would then replace the original file). Saves you from having a 200GB temporary file. Also implies partial files, so that -P is partially redundant.

BTW: I'm not sure how you did the original transfer, but if it was sftp/scp, then something is very wrong—those fully protect from any corruption on the network. You really ought to track down the cause. Defective RAM is a relatively common one.
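If you still want explicit per-block hashes, as the question asked, here is a rough sketch. The function name and the file names in the usage comment are made up for illustration:

```shell
# Hash a file in 64 MiB chunks, one "hash block N" line per chunk.
# Comparing the two lists with diff shows exactly which blocks differ.
blockhash() {
    local file=$1 i=0
    local bs=$((64 * 1024 * 1024))
    local size
    size=$(stat -c %s "$file")
    while [ $((i * bs)) -lt "$size" ]; do
        dd if="$file" bs="$bs" skip="$i" count=1 2>/dev/null |
            md5sum | awk -v n="$i" '{print $1, "block", n}'
        i=$((i + 1))
    done
}
# usage (names are placeholders):
#   blockhash source.bin > source.md5
#   blockhash copy.bin   > copy.md5
#   diff source.md5 copy.md5
```

Unlike rsync, this only tells you which blocks differ; you would still have to re-copy them yourself.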
Hash a file by 64MB blocks?
1,652,101,680,000
I'm comparing two USB devices post-rsync with md5sum /usb1/* /usb2/* | sort such that all the files, which are at the root of the drives, have their md5 sums calculated, then the output is sorted by md5sum. The entire command expands to md5sum /usb1/bigfile1 /usb1/bigfile2 /usb2/bigfile1 /usb2/bigfile2, for example. My impatience has me wondering what the progress is, i.e. which file is it working on right now? Because it's piped into another command, I have no stdout or stderr to see which file it's hashing atm. Is there some sort of command to report on what another command is doing right now? I have this tagged with "signals", but I don't know much about signals to be comfortable interrupting the md5sum process with one. I recall it was possible to send a signal to dd to make it report progress.
You can use progress for this:

progress -p $(pgrep md5sum)

or, if you want to continuously monitor md5sum:

progress -m -p $(pgrep md5sum)

Without using an external tool, you can see what files md5sum is currently accessing on Linux by listing /proc/$(pgrep md5sum)/fd, and find out more information about the file descriptors (including their position, which shows how much md5sum has processed) by looking at the files in /proc/$(pgrep md5sum)/fdinfo.

As you mention, dd will print out a progress report when it receives SIGUSR1, but that's a feature implemented by dd and not a general signal-mediated feature. By default sending SIGUSR1 to a process will kill it.
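A lower-tech workaround for this particular pipeline: announce each file on stderr yourself before hashing it. Only stdout goes into the pipe, so the progress lines still reach the terminal (the /usb1 and /usb2 paths are taken from the question):

```shell
for f in /usb1/* /usb2/*; do
    echo "hashing $f ..." >&2   # stderr: visible despite the pipe
    md5sum "$f"                 # stdout: goes to sort as before
done | sort
```

This gives per-file rather than per-byte progress, but it needs no extra tools.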
md5sum progress when piped
1,652,101,680,000
I have a file.bin and file.bin.sha. The file.bin.sha has 32 bytes and contains binary data. How can I verify the checksum? sha256sum complains about no properly formatted SHA256 checksum lines found.
Convert the 256-bit binary value to its hex ASCII representation, and append the filename to create a check file that sha256sum will like:

echo $(od -An -tx1 file.bin.sha | tr -d '\n ') file.bin > my256
sha256sum -c my256

od - octal (binary, hex) dump of file
-An - suppress addresses
-tx1 - print as one-byte values, hex
tr -d '\n ' - suppress blanks and newlines in the output
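If xxd happens to be installed, the same conversion can be sketched with it instead of od. File names are as in the question; the -f guard simply skips the step when the digest file is absent:

```shell
if [ -f file.bin.sha ]; then
    # xxd -p dumps plain hex; tr joins its wrapped lines into one string
    printf '%s  %s\n' "$(xxd -p file.bin.sha | tr -d '\n')" file.bin > my256
    sha256sum -c my256
fi
```

Note the two spaces between the hash and the filename: that is the field separator sha256sum -c expects.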
How to verify binary SHA checksum
1,652,101,680,000
I am trying to write a script and it uses the SHA of a date, but I am getting two different results and for the life of me can't figure out why.

echo -n 03112016 | cut -d'.' -f4 | sha256sum | cut -d' ' -f 1
482c00f7db8419d9f9a151d54de301d73c8f688b2e3e91c485f369596543612e

date "+%m%d%Y" | tr -d '\n' | sha256sum | cut -d' ' -f 1
d373ab72ec7d92ee06ebba4748f78829cd62ce68f1ac600ae1767a272869b664

I know it has to be something stupid on my end but I really appreciate any help.
Shell utilities that are designed to operate on text (such as cat, cut, sort, tail, etc.) require their input to be text files. A text file, in Unix terms:

consists only of valid characters in the ambient locale (LC_CTYPE locale setting), other than the null byte;
consists of a sequence of lines, each of which is terminated by a newline character (\n, a.k.a. line feed).

That second point implies that any non-empty file ends with a newline character. What happens if the input is not a text file depends on the utility. Old Unix systems tended to ignore text on a line after a null byte, and to ignore all or part of the last incomplete line (text after the last newline character). GNU versions always treat null bytes as an ordinary character and mostly pass through invalid byte sequences. GNU versions always process the whole input even if the final newline is missing, but they differ in whether they add a trailing newline in their output. For example, GNU cat always passes its input through unchanged, but many others, including cut, always print a newline at the end of each output line including the last one.

So when you produce the reference input, you need to suppress the trailing newline at the last minute:

echo 03112016 | cut -d'.' -f4 | tr -d '\n' | sha256sum

or just

echo -n 03112016 | sha256sum
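The difference is easy to demonstrate with printf, which adds a newline only when told to:

```shell
printf '03112016'   | sha256sum   # no trailing newline, same as echo -n
printf '03112016\n' | sha256sum   # trailing newline, same as cut's output
```

One extra byte in the input, so a completely different digest.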
Why don't the SHA's match?
1,652,101,680,000
I'm having some difficulty using md5sum to verify some copied files. I have two directories: dir1 and dir2. In dir1 there are five files: file1, file2, file3, file4 and file5. dir2 is empty. If I do: cp dir1/* dir2, then: md5sum dir1/* > checksums, then: md5sum -c checksums, the result is: dir1/file1: OK dir1/file2: OK dir1/file3: OK dir1/file4: OK dir1/file5: OK But this is no good. I want it to compare the checksums in the text file with the checksums of the copied files in dir2.
Try:

$ (cd dir1 && md5sum *) > checksums
$ cd dir2
$ md5sum -c ../checksums

checksums's content would look like:

d41d8cd98f00b204e9800998ecf8427e  file1
................................  file2
................................  file3
................................  file4
................................  file5
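A variant that avoids changing directories: rewrite the directory prefix in the checksum list before checking it. This assumes the copies really live under dir2, as in the question:

```shell
md5sum dir1/* > checksums                    # hashes with dir1/ paths
sed 's|dir1/|dir2/|' checksums | md5sum -c -   # check the dir2/ copies
```

md5sum -c - reads the (rewritten) checksum lines from stdin, so no second file is needed.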
Difficulty using 'md5sum -c'
1,652,101,680,000
I am running the shell script below, which gets the md5sum of the files in my PRIMARY directory:

#!/bin/bash
export PRIMARY=/data01/primary
for entry in "$PRIMARY"/*
do
   local_md5sum=/usr/bin/md5sum "$entry" | awk '{print $1}'
   echo local_md5sum
done

As soon as I run the above shell script and try to print out the md5sum values of my files, I always get:

./md5checksum_check_1.sh: line 7: /test01/prime/pp_monthly_1980_58_200003_5.data: Permission denied

But if I try to run the command below as-is on the console, then it works fine:

/usr/bin/md5sum /test01/prime/pp_monthly_1980_58_200003_5.data | awk '{print $1}'

I am not sure why. Is there anything wrong I am doing?
You're missing some syntax on this line:

local_md5sum=/usr/bin/md5sum "$entry" | awk '{print $1}'

You need

local_md5sum=$(/usr/bin/md5sum "$entry" | awk '{print $1}')

Without the $(), you are trying to execute $entry as a command.
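Putting it together, here is a corrected sketch of the loop, wrapped in a function so the directory can be passed in rather than hard-coded as $PRIMARY was:

```shell
hash_dir() {
    local entry local_md5sum
    for entry in "$1"/*; do
        # $(...) captures the command's output into the variable
        local_md5sum=$(/usr/bin/md5sum "$entry" | awk '{print $1}')
        # and the variable must be expanded with $ when echoed
        echo "$local_md5sum"
    done
}
# usage: hash_dir /data01/primary
```

The original also echoed the literal string local_md5sum instead of the variable's value; that second bug is fixed here too.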
Permission denied while getting the md5sum of a file using shell script?
1,652,101,680,000
I am using sha256sum in a bash script to check whether a file has changed or not. My idea is to first store the sha256sum in a *.sha256 file. Then, if this file is present, use it for the sha256 comparison with the --check option. If the hashes match, continue with the rest of the script; otherwise create a new hash file (*.sha256) and replace the older one with it.

I have done:

x="/home/test.json"
if [[ -s $x.sha256 ]]; then
  sha256sum --check $x.sha256 #exit 1
  s1=$(sha256sum "$x" > "$x.sha256") #exit 1
else
  s1=$(sha256sum "$x" > "$x.sha256")
  echo "sha256 file is created"
fi

With the code above, if the x.sha256 file is not present initially, it is created. But if the file is already available and the hashes don't match, it throws an error:

/home/test.json: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match

This is expected, but in this case I want to create a new x.sha256 file and replace the old one. Can anyone please let me know what changes are needed? Thanks in advance.
From the comment:

    But the script uses set -Eeo pipefail

set -e causes the whole script to fail if and when sha256sum --check exits with a falsy status. To avoid that, put the command in any sort of conditional, e.g. run sha256sum --check || true instead.

Also, as mentioned, s1=$(sha256sum "$x" > "$x.sha256") looks a bit off: as the output from sha256sum is redirected to a file, there's nothing for the command substitution to catch, and $s1 ends up empty regardless of what happens. If you don't need s1, just drop the command substitution, and if you want to put the output of sha256sum in both the variable and the file, use something like

s1=$(sha256sum "$x" | tee "$x.sha256")

So, I'd rewrite the whole script as something like this:

#!/bin/bash
set -e
file="/home/test.json"
if [[ -s $file.sha256 ]]; then
    if ! sha256sum --check "$file.sha256"; then
        echo "stored hash for '$file' does not match existing file, storing new hash"
        sha256sum "$file" > "$file.sha256"
    else
        : # hash file exists and matches existing file, do nothing
    fi
else
    echo "no stored hash for '$file', creating hash file"
    sha256sum "$file" > "$file.sha256"
fi

That is, unless you need a copy of the hash further in the file, in which case the branch for an existing matching hash also needs to read "$file.sha256" in.

But if you're using set -e, see BashFAQ/105 -- Why doesn't set -e (or set -o errexit, or trap ERR) do what I expected?
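For comparison, here is a compact sketch of the same idea that sidesteps sha256sum --check (and thus the set -e interaction) entirely, by comparing the hash strings itself. The function name is made up for illustration:

```shell
refresh_hash() {
    local file=$1 stored current
    stored=$(cut -d' ' -f1 "$file.sha256" 2>/dev/null)  # empty if missing
    current=$(sha256sum "$file" | cut -d' ' -f1)
    if [ "$current" != "$stored" ]; then
        sha256sum "$file" > "$file.sha256"
        echo "hash file (re)created for $file"
    fi
}
# usage: refresh_hash /home/test.json
```

A missing hash file and a stale hash file fall through the same branch, which is exactly the behaviour the question asks for.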
How to compare hashes of files in bash script?
1,652,101,680,000
I am trying to count the occurrence of each word on particular lines of input. Example (what I am trying to achieve):

$ ./foo.pl
asd fgh
asd iop
zxc
asd: 1, 2
fgh: 1
iop: 2
zxc: 3

Just a program to record on which lines a word occurred. This script:

#!/usr/bin/perl -w
while(<>){
    ++$line_num;
    @words = split $_;
    for my $w(@words){
        push @h{$w}, $line_num;
    }
}
for my $k(keys %h){
    print "$k:\t";
    print "@h{$k}\n";
}

gives the error:

Experimental push on scalar is now forbidden

But @h{$w}, which is a hash where the key is a word (string) and the value is an array, is not a scalar. So why is this error?
As noted in Rakesh Sharma's comment, the syntax for accessing an anonymous array as an element of a hash is @{ $h{$w} }. So for example:

#!/usr/bin/perl -w
while(<>){
    for my $w (split) {
        push @{ $h{$w} }, $.;
    }
}
for my $k (keys %h) {
    print "$k:\t", "@{ $h{$k} }\n";
}

See for example Hash of Arrays in Perl.
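For comparison, the same word-to-line-numbers mapping can be sketched in awk; the output format differs slightly from the Perl version, and the input is the sample from the question:

```shell
# prints (after sort):
#   asd: 1 2
#   fgh: 1
#   iop: 2
#   zxc: 3
printf 'asd fgh\nasd iop\nzxc\n' |
awk '{ for (i = 1; i <= NF; i++) l[$i] = l[$i] " " NR }
     END { for (w in l) print w ":" l[w] }' |
sort
```

awk grows the list for each word as a string instead of an array, which is why no special dereferencing syntax is needed.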
How to implement string to array hash in perl?
1,652,101,680,000
I'm having a problem checking certain .md5 files; they are all files that are in directories which have been renamed since the files were downloaded.

[User1 Directory X]$ md5sum -c file1.txt.md5
md5sum: directoryx/file1.txt: No such file or directory
directoryx/file1.txt: FAILED open or read
md5sum: WARNING: 1 listed file could not be read

I noticed the difference in the name of the directory that I am in vs the directory md5sum is looking in. The directory was either renamed (not by me!) since the files were downloaded, or the individual files were downloaded to this directory rather than the entire directory being downloaded at once. I edited the directory name to match but this didn't solve the issue.

[User1 directoryx]$ md5sum -c file1.txt.md5
md5sum: directoryx/file1.txt: No such file or directory
directoryx/file1.txt: FAILED open or read
md5sum: WARNING: 1 listed file could not be read

Any help on how to fix this?
It seems (from your prompt) that the file is located in the correct directoryx, but since md5sum will try to read the file at the path given by the .md5 file, and since you are in directoryx, it won't find it. Move one level up in the directory hierarchy and use

$ md5sum -c directoryx/file1.txt.md5
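If moving up a level is awkward, another option is to strip the stored prefix from the checksum line instead, so the path becomes relative to the directory you are already in. The directoryx prefix is taken from the question's output; check what your .md5 file actually records with cat first:

```shell
if [ -f file1.txt.md5 ]; then
    # drop the "directoryx/" prefix and check from the current directory
    sed 's|directoryx/||' file1.txt.md5 | md5sum -c -
fi
```

md5sum -c - reads the rewritten checksum line from stdin, so the .md5 file on disk is left untouched.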
md5sum failed to open a file, directory issue