date int64 1,220B 1,719B | question_description stringlengths 28 29.9k | accepted_answer stringlengths 12 26.4k | question_title stringlengths 14 159 |
|---|---|---|---|
1,652,101,680,000 |
I have this script that creates a MySQL dump of a database and sends it to a storage server. But I see that, sometimes, the generated file is the wrong size, even when sending the file with rsync.
I'd like to check the remote file's md5sum: if the hash matches the local one, the local file is removed. If the hash is different, though, the rsync is re-attempted.
The original script is:
#!/bin/bash
# database credentials
DATABASEHOST=<host>
DATABASEUSER=<user>
DATABASEPASSWORD=<password>
DATABASESCHEMA=<schema>
DATABASEENV=<env>
# Local directory of mysqldump file
LOCALDIR=<localdir>
# Temporary directory for compressed file
TEMPDIR=<tempdir>
# Remote Directory for backups.
REMOTEDIR=<remote-dir>
# USERname to login as
BACKUPUSER=<backupuser>
# Backup host to login to
BACKUPHOST=<backuphost>
# mysqldump file
MYSQLDUMPFILE="$(date +%Y%m%d)"_bkp_"$DATABASESCHEMA".sql
# compressed file
COMPRESSEDFILE="$(date +%Y%m%d)"_"$DATABASEENV"_"$DATABASESCHEMA".tar.gz
#--- end config
echo $(date +%H:%M)
echo "Creating the MySQL dump"
mysqldump --host="$DATABASEHOST" --user="$DATABASEUSER" --password="$DATABASEPASSWORD" --single-transaction "$DATABASESCHEMA" > "$LOCALDIR"/"$MYSQLDUMPFILE"
#echo "Generating md5sum"
md5sum "$LOCALDIR"/* > "$LOCALDIR"/checklist.chk
#echo "Compressing the dump and checklist"
tar -cvzf "$TEMPDIR"/"$COMPRESSEDFILE" "$LOCALDIR"/*
#echo "Sending the compressed file to storage location"
rsync -azvh "$TEMPDIR"/"$COMPRESSEDFILE" "$BACKUPHOST":"$REMOTEDIR"
echo "Removing generated files"
rm "$LOCALDIR"/checklist.chk > /dev/null 2>&1
rm "$LOCALDIR"/"$MYSQLDUMPFILE" > /dev/null 2>&1
rm "$TEMPDIR"/"$COMPRESSEDFILE" > /dev/null 2>&1
echo $(date +%H:%M)
|
rsync knows when a file is incomplete.
Just run rsync regularly, and it will by itself take care to re-send new parts of the file as needed.
Could it be that $TEMPDIR is too small to hold the output of tar -czvf? Then you would send that (incomplete) file with rsync.
why not simplify:
dump the DB as you did
then cd "$LOCALDIR" && rsync -azvh *_bkp_*.sql "$BACKUPHOST":"$REMOTEDIR"
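If you do want the verify-and-retry loop the question asks for, a sketch along these lines could work. This is not the author's script: the `maxtries` value, function name, and the commented `ssh`/`rsync` invocations reusing the question's variables are assumptions for illustration.

```shell
#!/bin/bash
# Sketch: retry the transfer until the remote md5sum matches the local one,
# then delete the local copy. The remote hash is obtained via a command
# string so the logic can also be exercised locally.
verify_and_clean() {
    # $1: local file; $2: command printing the remote file's md5 hash;
    # $3: command to (re-)send the file (may be empty)
    local file=$1 hash_cmd=$2 send_cmd=$3
    local tries=0 maxtries=3 local_md5 remote_md5
    local_md5=$(md5sum "$file" | cut -d' ' -f1)
    while [ "$tries" -lt "$maxtries" ]; do
        remote_md5=$(eval "$hash_cmd")
        if [ "$local_md5" = "$remote_md5" ]; then
            rm -f "$file"            # hashes match: safe to remove local copy
            return 0
        fi
        if [ -n "$send_cmd" ]; then
            eval "$send_cmd" || true # keep retrying even if the transfer fails
        fi
        tries=$((tries + 1))
    done
    echo "still mismatched after $maxtries attempts" >&2
    return 1
}

# Intended use with the script's variables (assumed, untested):
# verify_and_clean "$TEMPDIR/$COMPRESSEDFILE" \
#     "ssh $BACKUPHOST md5sum $REMOTEDIR/$COMPRESSEDFILE | cut -d' ' -f1" \
#     "rsync -azvh $TEMPDIR/$COMPRESSEDFILE $BACKUPHOST:$REMOTEDIR"
```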
| how to make a loop with md5sum test on a bash script? |
1,652,101,680,000 |
For a base64-encoded SHA-X string, what command can decrypt it back to the original string? Thanks.
|
From the linked post, your original string was generated by a method such as
echo -n foo | openssl dgst -binary -sha1 | openssl base64
What this generates is a digest, with SHA1 being the method of calculating the digest.
In this situation there is insufficient data to reconstruct the original string. This digest is a checksum of the original string and can be used for validation; to verify a message hasn't been tampered with.
So if you have a file xyzzy that contains your message you can run
cat xyzzy | openssl dgst -binary -sha1 | openssl base64
If the result is the same string as you started with then you can be confident it hasn't been modified.
The best you can do is remove the base64 part to get the binary digest:
echo $base64string | openssl base64 -d
but this is not the original message, just the checksum. The original message is not reconstructable from the digest.
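For example, round-tripping the digest from the first command (od is used here to show the decoded bytes as hex, since the raw digest is binary):

```shell
# base64-encode the SHA-1 digest of "foo", then decode it back to bytes
b64=$(echo -n foo | openssl dgst -binary -sha1 | openssl base64)
echo "base64: $b64"
hex=$(echo "$b64" | openssl base64 -d | od -An -tx1 | tr -d ' \n')
echo "digest: $hex"   # the 20-byte SHA-1 digest of "foo", as hex
```

The hex output is the same value `sha1sum` would print for "foo": the checksum, not the message.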
| How can I decrypt back a base64 encoded shaX binary string? |
1,652,101,680,000 |
I know that dpkg keeps md5sums of configuration files for each package installed, so it can tell whether they are changed or not when upgrading.
Does it keep md5sums for regular (non-configuration) files as well?
|
Yes, look at the contents of /var/lib/dpkg/info/*.md5sums. Strictly speaking this isn't handled by dpkg; these checksums are generated at build-time (typically by dh_md5sums) and included in the binary packages.
You can check that the installed files still match their MD5 checksums using the debsums command.
| Debian package's files authentication |
1,652,101,680,000 |
Consider a folder with many XML files (10K small text files). Some XML files are identical, some are different.
I would like to find out what files are identical (ignoring whitespaces, tabs and linebreaks) and record the files in each cluster somehow.
I don't need high precision on this, so I thought one way of doing it would be with MD5 or any other hashing algorithm, i.e. count the number of files with the same exact MD5 sum, but I would need to remove spaces first.
I'm in OS X and can check the MD5 of a file as follows:
$ md5 file_XYZ.xml
MD5 (file_XYZ.xml) = 0de0c7bea1a75434934c3821dcba759a
How can I use this to cluster identical files? (either a text file with filenames with the same hash, or clustering files in folders would do it)
|
You could create a "normalized" version of each XML file with something like:
xmllint --noblanks --format original.xml > normalized.xml
That would get rid of "unimportant"-to-XML whitespace, indent consistently and so forth. After that, you could use cksum to find identical normalized files.
I'll suggest a script:
for ORIGXML in *.xml
do
xmllint --noblanks --format "$ORIGXML" > "normalized.$ORIGXML"
cksum "normalized.$ORIGXML" | sed 's/normalized\.//' >> files.list
done
sort -k1.1 files.list > sorted.files
I'm not sure I'd bother with an MD5 checksum. You're looking for duplicates, not doing cryptography with evil adversaries opposing you.
If you're looking for "nearly identical" XML files, you could maybe use Normalized Compression Distance to see how "far apart" the files are from each other. More simply, you could gzip or bzip2 the XML files, then sort based on compressed file size. The closer the compressed file size, the more identical the XML files will be.
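As a rough sketch of the clustering step itself (using a cruder normalization than xmllint: this strips all whitespace, even inside text nodes, which may be acceptable given the stated low-precision requirement; md5sum stands in for OS X's md5, and the function name is my own):

```shell
# Group XML files by the hash of their whitespace-stripped content.
# One list file per cluster: clusters/<hash>.txt names the identical files.
cluster_xml() {
    # $1: directory containing the .xml files
    local dir=$1 f hash
    mkdir -p "$dir/clusters"
    for f in "$dir"/*.xml; do
        [ -e "$f" ] || continue
        hash=$(tr -d ' \t\n\r' < "$f" | md5sum | cut -d' ' -f1)
        basename "$f" >> "$dir/clusters/$hash.txt"
    done
}
# usage: cluster_xml /path/to/xmldir
```

Afterwards, every `clusters/<hash>.txt` with more than one line is a cluster of files that are identical modulo whitespace.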
| Clustering identical files ignoring spaces & linebreaks |
1,652,101,680,000 |
Why the difference in the following?
$ echo -n "foo" | openssl dgst -sha1 -hmac "key"
(stdin)= 9fc254126c2b1b7f106abacae0cb77e73411fad7
$ echo -n "foo" | sha1sum
0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33 -
|
The -hmac "key" is what does it. Adding a HMAC is sort of like salting the data. It's not quite the same but you're changing how the hash is calculated. Thus you end up with a different result.
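Side by side (the first two digests are the values shown in the question):

```shell
echo -n "foo" | sha1sum                           # plain SHA-1 of the data
echo -n "foo" | openssl dgst -sha1 -hmac "key"    # HMAC-SHA1 keyed with "key"
echo -n "foo" | openssl dgst -sha1 -hmac "key2"   # another key, another MAC
```

Changing the key changes the MAC, and no key at all (plain sha1sum) is a different calculation again.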
| sha1sum reporting different hash value relative to openssl |
1,652,101,680,000 |
I'm using this right now:
time md5sum -c *.txt | pv | grep -v ': OK$'
but aren't there any smarter solutions that can show how many files haven't been checked yet? I mean, I have many md5sums in .txt files in a dir and I need to check them, but it would be good to know how many files are left to scan.
P.S.: so it's not really a progress bar, just a counter that outputs how many files are left for "md5sum -c FILENAME".
|
You could pass the names to md5sum one by one.
n=$(cat *.txt | wc -l)
cat *.txt | {
i=0 bad=0
while IFS= read -r line; do
i=$((i+1))
echo "Checking file $i/$n: $line"
echo "$line" | md5sum -c - || bad=$((bad+1))
done
[ $bad -eq 0 ] || { echo "$bad bad checksums"; false; }
}
Or, for casual use, you can run the simple command and check which file md5sum is up to by seeing what it has open.
lsof -p1234
# note the file name
cat *.txt | grep -n FILENAME
| Good "progress bar" about md5sum checking progress? |
1,652,101,680,000 |
I'm doing some file listing with the find command, as follows:
find /dir1/ -type f -exec md5sum {} + | sort -k 2 > dir1.txt
Then:
find /dir2/ -type f -exec md5sum {} + | sort -k 2 > dir2.txt
I noticed that there were some equal hashes despite the files being different; for example, an xxxxxxxx.jpg image file with the same hash as a yyyyyyyy.mp3 sound file.
The main question is: how reliable is an md5sum file comparison?
|
The collision probability of md5sum is about 1 in 2^64. Refer to this post on crypto.se for more details.
Side note: the contents of the file are hashed; the filename doesn't play any role in the hashing. Are you sure the files are different and not just the names?
$ md5 /tmp/files.txt*
MD5 (/tmp/files.txt) = 29fbedcb8a908b34ebfa7e48394999d2
MD5 (/tmp/files.txt.clone) = 29fbedcb8a908b34ebfa7e48394999d2
| Acuracy Level of md5sum Comparison |
1,652,101,680,000 |
I am uploading a file via sftp and performing a safety check that the file was completely uploaded to the remote server: I take the md5 hash of both files (local and remote) and compare them. If they match, I conclude that the upload was successful. Here is the relevant part of the shell script.
ssh $REMOTE_MC 'digest -a md5 $TARGET_DIR/$filename' > $HOME_DIR/remote_hash_$datetag.txt
local_hash=$(md5sum $HOME_DIR/$dump | cut -d' ' -f1)
echo "local = $local_hash"
cat $HOME_DIR/remote_hash_$datetag.txt
remote_hash=$(cat $HOME_DIR/remote_hash_$datetag.txt)
echo "remote = $remote_hash"
output:
local = cd8d77f0467754bc0c1c7ac3fb7f6184
dee4a8484f99c577fd70cb8fa01e5995
remote = dee4a8484f99c577fd70cb8fa01e5995
The problem I am facing is that when I run the script, the hashes differ, but if I run the command
ssh $REMOTE_MC 'digest -a md5 $TARGET_DIR/$filename' > $HOME_DIR/remote_hash_$datetag.txt
outside the shell script, I get the same hash. What am I doing wrong in the script?
Things I tried:
ssh $REMOTE_MC 'digest -a md5 $TARGET_DIR/$filename >> $TARGET_DIR/remote.txt'
I redirected the output to a remote file instead of a local file, and the remote file contains the correct hash. But if I redirect it to a local file, the hash mismatches.
Thanks in advance.
|
You are using single quotes surrounding the ssh command. This way the variables $TARGET_DIR and $filename are taken literally instead of being evaluated. Change the single quotes to double quotes to have the variables evaluated:
ssh $REMOTE_MC "digest -a md5 $TARGET_DIR/$filename" > $HOME_DIR/remote_hash_$datetag.txt
Another suggestion is to write the variables as ${variable} like ${datetag} to make the variable name boundaries more clear.
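The effect is easy to see locally, without ssh (TARGET_DIR here is just an illustrative value):

```shell
TARGET_DIR=/tmp
echo 'file is $TARGET_DIR/file'   # single quotes keep the variable literal
# prints: file is $TARGET_DIR/file
echo "file is $TARGET_DIR/file"   # double quotes expand the variable
# prints: file is /tmp/file
```

Inside single quotes, the remote shell receives the literal text `$TARGET_DIR/$filename` and expands it with its own (likely empty) variables, which is why the script hashed the wrong file.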
| Executing a command in remote machine and redirecting output to a local file |
1,652,101,680,000 |
I have a file like this :
1 Record|1111|ABC
2 text in between for record 1
3 text in between for record 1
4 Record|2222|XYZ
5 text in between for record 2
6 Record|3333|XYZ
7 text in between for record 3
8 .
I want to read this file and generate something like
<Record_number> | <start line> | <number of lines> | md5sum(content)
That is:
1111|1|2|md5sum(Record|1111|ABC\ntext in between for record 1\ntext in between for record 1)
2222|4|1|md5sum(Record|2222|XYZ\ntext in between for record 2\n)
etc.
Currently, I am doing this using a two step process:
Step 1:
grep -n -C 0 "Record|" ../test.txt | awk -F[':|'] '{print $3"|"$1}'
will create
1111|1
2222|4
3333|6
Step 2:
Read this file line by line and generate md5sum and number of lines through script.
The issue is that this two-step processing takes a lot of time, and the file is huge (~4 GB).
Is there a better way to do this?
|
Based on Costas' answer.
1) Create a file parse.awk, with the following content :
/^Record/ {
if (s>0) {
printf ("%s|%s|", r,l)
system("echo '"line"' | md5sum - | awk '{print $1}' ");
}
s=1;
r=$2;
c=1;
l=NR;
line=$0;
}
!/^Record/ {
line=line"\n"$0;
c+=1
}
END {
printf ("%s|%s|", r,l)
system("echo '"line"' | md5sum - | awk '{print $1}' ");
}
See Costas' explanations.
This script just does two things:
printf prints the start of the resulting line (rather than print, which would append a newline)
system(echo $line | md5sum) prints the md5, followed by a newline
2) Run awk -F"|" -f parse.awk myfile
3) Enjoy the result :
1111|1|cb36533781d8dd00011a85b0db9b87b3
2222|4|521331bb249e8a668afa2199fa8d289a
3333|6|6c2564464187094e9db3159d26ade2a5
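Note that the system("echo '...' | md5sum") trick breaks if a record contains single quotes, and accumulating each record in an awk variable costs memory on a ~4 GB file. A hypothetical single-pass variant that streams each record body through a small work file avoids both problems (the function name and temp-file handling are my own; treat it as a sketch):

```shell
hash_records() {
    # $1: input file whose records start at lines matching ^Record|
    # prints: <record id>|<start line>|<md5 of the record's content>
    local infile=$1 bodyfile
    bodyfile=$(mktemp)
    awk -v bodyfile="$bodyfile" -F'|' '
        function flush() {
            if (rec != "") {
                printf "%s|%d|", rec, start
                close(bodyfile)                 # finish writing the body
                cmd = "md5sum < " bodyfile
                cmd | getline sum; close(cmd)
                split(sum, parts, " "); print parts[1]
            }
        }
        /^Record[|]/ { flush(); rec = $2; start = NR; print > bodyfile; next }
        { print >> bodyfile }
        END { flush() }
    ' "$infile"
    rm -f "$bodyfile"
}
# usage: hash_records myfile
```

Each record's lines are written to the work file (truncated on every new `Record|` header), hashed with md5sum, and the hash is appended to the record id and start line, so no record content ever passes through the shell.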
| Read file and find all occurrences and generate hash for the content between the occurrences |
1,652,101,680,000 |
We are working with an older software product that has some limited programming capabilities - specifically, no bit manipulation functions. This has created a significant problem as we need to implement a HMAC-MD5 Hash to interface with an industry standard software interface.
The older software does have the ability to call a C program/dll and pass that information and obtain a returned value. We have no experience with the C language or how to set up the shared libraries required.
So specifically:
How should we set up and install a simplistic C compiler environment?
Where can we find an existing open source C implementation of the HMAC-MD5 algorithm?
If this does not exist, where can we find a resource to implement this algorithm?
Our environment is Unix CENTOS 4.9 and Apache/1.3.42
|
The compiler should be gcc. If you don't have it, the package has the same name; install it however you normally install CentOS packages (e.g., yum install gcc).
There are a lot of open source implementations of HMAC-MD5. Any crypto library will have it. And various other projects have one. Google or a code search will quickly turn up thousands.
In fact, HMAC is defined in RFC2104, which includes example C code in the appendix. (You'll need to grab the MD5 example code from RFC 1321 as well).
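As a sanity check while porting the C code, the well-known "Jefe" HMAC-MD5 test vector (published in RFC 2202) can be reproduced with the OpenSSL command-line tool:

```shell
echo -n "what do ya want for nothing?" | openssl dgst -md5 -hmac "Jefe"
# expected digest: 750c783e6ab0b503eaa86e310a5db738
```

If your C implementation produces the same digest for that key and message, the core algorithm is working.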
| C-library HMAC_MD5 questions |
1,652,101,680,000 |
I am creating an image of an SD card partition (with dd), and yet the checksums (md5sum) of the image and the partition are not the same.
What am I doing wrong?
My sd card is inserted into an external reader but not mounted.
sudo fdisk -l
Device Boot Start End Sectors Size Id Type
/dev/sdc3 30644224 250347519 219703296 104.8G b W95 FAT32
Creating the image:
sudo dd if=/dev/sdc3 of=/home/pi/part3.img bs=8M
Creating checksums:
sudo md5sum /dev/sdc3
sudo md5sum /home/pi/part3.img
|
Your SD card is dead or dying (see more below); you will need to replace it. I've included some advice on what to use instead:
Using an SD card in a Raspberry Pi will drastically reduce its life expectancy. SD cards are designed for about 15 years of camera use, but with an RPi you will be writing logs tens if not hundreds of times a day. Helium miners are an example of an RPi-based system where writes are common: they use eMMC instead of SD cards, or 64 GB cards when forced to use SD, with most of the 64 GB left unused for extra endurance.
If you have used this card for a year without changing the way the operating system writes to the disk, then your SD card has prematurely failed due to heavy write use.
I'd recommend replacing it with a 64 GB SD card, and implementing as many write-reducing recommendations as you can.
A dead or dying SSD or hard disk can be the result of bit rot, defects in the disk's controller firmware/microcode, or in your system's drive controller. There are even utilities designed to repetitively read the same block until the read succeeds.
| md5sum of image and sd-card partition differ |
1,652,101,680,000 |
I have written this shell script to test an SHA-512 password hash string:
myhash='$6$nxIRLUXhRQlj$t29nGt1moX3KcuFZmRwUjdiS9pcLWpqKhAY0Y0bp2pqs3fPrnVAXKKbLfyZcvkkcwcbr2Abc8sBZBXI9UaguU.' #Which is created by mkpasswd for test
i=0
while [[ 1 -eq 1 ]]
do
testpass=$(mkpasswd -m sha-512 "test")
i=$[ $i + 1 ]
if [[ "$testpass" == "$myhash" ]];
then
echo -e "found\n"
break
else
echo -e "$myhash /= $testpass :-> $i Testing....\n"
fi
done
After running the loop for 216,107 iterations I never found a match. But my Linux OS (Ubuntu) matches my sign-in credentials almost instantly. My question is: why do I not get a match so quickly?
|
A password hash (like what you put in myhash) contains some metadata indicating which hash function is used and with what parameters such as cost, a salt, and the output of the hash function. In the modern Unix password hash format, the parts are separated by $:
6 indicates the Unix iterated SHA-512 method (a design that is similar, but not identical, to PBKDF2).
There are no parameters, so the cost factor is the default value.
nxIRLUXhRQlj is the salt.
t29nGt1moX3KcuFZmRwUjdiS9pcLWpqKhAY0Y0bp2pqs3fPrnVAXKKbLfyZcvkkcwcbr2Abc8sBZBXI9UaguU. is the expected output.
Each time you run mkpasswd -m sha-512, it creates a password hash with a random salt. So each run of this command produces a different output.
When you type your password, the system calculates iterated_sha512(default_cost, "nxIRLUXhRQlj", typed_password) and checks whether the output is "t29nGt1moX3KcuFZmRwUjdiS9pcLWpqKhAY0Y0bp2pqs3fPrnVAXKKbLfyZcvkkcwcbr2Abc8sBZBXI9UaguU.".
What your program is doing is different: you generate a random salt, then compare it plus the output of the password hashing function to myhash. This only matches if you've happened to generate the same salt, which has a negligible probability (your computer isn't going to generate the same salt twice in your lifetime). But you don't need to guess the salt: it's right there in the password hash.
Recommended reading: How to securely hash passwords?
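To see this concretely: pinning the salt makes the output deterministic. With a modern OpenSSL (1.1.1 or newer), openssl passwd -6 implements the same SHA-512 crypt scheme, so reusing the salt from the question's hash gives a repeatable result, while the default random salt does not:

```shell
# Same salt in, same hash out, every time:
openssl passwd -6 -salt nxIRLUXhRQlj test
openssl passwd -6 -salt nxIRLUXhRQlj test
# A random salt (the default) changes the output on every run:
openssl passwd -6 test
openssl passwd -6 test
```

This is exactly what the login check does: it reads the salt out of the stored hash and reuses it, instead of generating a fresh one.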
| How does my linux OS make so quickly sign in process? |
1,652,101,680,000 |
I want to create a hash of more than one source in Bash.
I am aware that I can:
echo -n "STRING" | sha256sum
or
sha256sum [FILE]
What I need is:
STRING + FILE
FILE + FILE
STRING + STRING
STRING + FILE + STRING
For example STRING + FILE
Save the hash of STRING in a variable and the hash of the [FILE] in a variable. Compute and create a hash of the sum.
Save the hash of the STRING in a file and the hash of the [FILE] in the same file and create a hash of this file.
Can I create a hash using a single command?
For example: echo "STRING" + [FILE] | sha256sum
How can I accomplish this, and what is the recommended or correct method?
UPDATE
With Romeo Ninov's answer, EXAMPLE 1:
echo -n "STRING" && cat [FILE] | sha256sum
When I do:
EXAMPLE 2:
echo $(echo -n "STRING" | sha256sum) $(sha256sum [FILE]) | sha256sum
What should I use? I'm getting different results. What is the correct method to achieve this?
|
You could create a script like this to hash multiple files, and then hash the concatenation of their hashes. Hashing in two parts like this instead of concatenating all data first should work to prevent mixups where the concatenation loses information on the borders between the inputs (e.g. ab+c != a+bc).
#!/bin/bash
# function to get the hashes
H() {
sha256sum "$@" |
LC_ALL=C sed '
s/[[:blank:]].*//; # retain only the hash
s/^\\//; # remove a leading \ that GNU sha256sum at least
# inserts for file names where it escapes some
# characters (such as CR, LF or backslash).'
}
# workaround for command substitution removing final newlines
hashes=$(H "$@"; echo .)
hashes=${hashes%.}
# just for clarity
printf "%s\n" "----"
printf "%s" "$hashes"
printf "%s\n" "----"
# hash the hashes
final=$(printf "%s" "$hashes" | H)
echo "final hash of $# files: $final"
An example with two files:
$ echo hello > hello.txt
$ echo world > world.txt
$ bash hash.sh hello.txt world.txt
----
5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317
----
final hash of 2 files: 27201be8016b0793d29d23cb0b1f3dd0c92783eaf5aa7174322c95ebe23f9fe8
You could also use process substitution to insert a string instead, this should give the same output:
$ bash hash.sh hello.txt <(echo world)
[...]
final hash of 2 files: 27201be8016b0793d29d23cb0b1f3dd0c92783eaf5aa7174322c95ebe23f9fe8
Giving the same input data (hello\nworld\n) with a different separation gives a different hash:
$ bash hash.sh <(printf h) <(printf "ello\nworld\n")
[...]
final hash of 2 files: 0453f1e6ba45c89bf085b77f3ebb862a4dbfa5c91932eb077f9a554a2327eb8f
Of course, changing the order of the input files should also change the hash.
The part between the dashes in the output is just for clarity here, it shows the data that goes to the final sha256sum. You should probably remove it for actual use.
Above, I used sed to remove the filename(s) from the output of sha256sum. If you remove the | sed ... part, the filenames will be included and e.g. hash.sh hello.txt world.txt would instead hash the string
5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 hello.txt
e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317 world.txt
The sub-hashes are the same, but the input to the final hash is different,
giving f27b5175dec88c76dc6a7b368167cd18875da266216506e10c503a56befd7e14 as the result. Obviously, changing the filenames, including going from hello.txt to ./hello.txt would change the hash. Also using process substitution would be less useful here, as they'd show up with odd implementation-dependent filenames (like /dev/fd/63 with Bash on Linux).
In the above, the input to the final hash is the hex encoding of the hashes of the input elements, with newlines terminating each. I don't think you need more separation than that, and could technically even drop the newlines as the hashes have a fixed length anyway (but we get the newlines for free and they make it easier to read for a human).
Though note that sha256sum gives just plain hashes. If you're looking for something to generate authentication tags, you should probably look into HMAC or such, and be wary of length-extension attacks (which a straightforward H(key + data) may be vulnerable to) etc.
Depending on your use-case, you might want to consider going to security.SE or crypto.SE, or hiring an actual expert.
| How can I create a hash or sha256sum in Bash using multiple sources or inputs? What is the recommended method? |
1,421,872,935,000 |
I have found out how I can convert plain text into a SHA hash (http://hash.online-convert.com/sha512-generator), but how can I convert a SHA hash back to plain text?
|
SHA-1, SHA-256, SHA-512 and all the other SHA functions are cryptographic hash functions. One of the defining properties of cryptographic hash functions is preimage resistance: given a cryptographic hash function F and a value h, it is infeasible to find a text m such that F(m) = h. Note that hashing is not encryption: with encryption, you can find the original if you find the decryption key, but with hashing, you can't find the original except by guessing, period.
If you have the hash of a text, the only ways to find the text are:
Make an exhaustive search. If you take all the computers existing today and devote them to this task, this will take about 100 quintillion times the age of the universe for SHA-1, and much, much longer for SHA-512. Bring a book.
Make a fundamental breakthrough in cryptography. This is theoretically possible in that nobody has been able to prove that any of the SHA-* family are actually cryptographic hash functions, we just believe they are because professional cryptographers have tried to break them for years and failed. Publish your technique, you'll be famous.
Guess the text. It's easy to verify each guess. Be prepared to go through a lot of wrong guesses. Depending on the length and complexity (more precisely, on the entropy introduced by the method used to generate the text), this may range anywhere between quick (e.g. if you know it's a dictionary word) and infeasible (e.g. if it's a string of 50 random letters).
Figure out what input was passed to the function by non-computational means, such as finding the person who submitted the text and hitting them with a wrench until they reveal the password, or digging through the server logs (if the text was logged somewhere).
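The "guess the text" option in miniature: hash candidate texts until one matches. Here the target is computed on the spot from "hello" and the word list is inline; a real attempt would read a word list such as /usr/share/dict/words (path varies by system).

```shell
target=$(printf %s hello | sha256sum | cut -d' ' -f1)
for guess in apple banana hello zebra; do
    # verifying a guess is cheap: hash it and compare
    if [ "$(printf %s "$guess" | sha256sum | cut -d' ' -f1)" = "$target" ]; then
        echo "match: $guess"
        break
    fi
done
# prints: match: hello
```

This works only because the candidate set is tiny; for a long random string the search space makes the same loop hopeless.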
| How to convert SHA to plain text? [closed] |
1,421,872,935,000 |
I am running a script that copies a file from one location to another. In the script I am calculating the MD5sum of the original file and the copied file using the command below, and they are different:
echo -n "file" | md5sum
How come the same file has different MD5sums? Does the copy command change something in Linux?
I have also checked the checksum using cksum filename and it also comes out different.
|
echo -n "file" | md5sum
You are not calculating the checksum of the file, but of the filename. It is probably different because you are using two different paths (echo -n "/old/path/to/file" | md5sum vs. echo -n "/new/path/to/file" | md5sum).
To calculate the md5sum of the file, use this command:
md5sum file
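The difference is easy to demonstrate with a throwaway file and a copy of it under another name:

```shell
tmp=$(mktemp -d)
echo "same content" > "$tmp/file"
cp "$tmp/file" "$tmp/copy"
echo -n "$tmp/file" | md5sum    # hashes the NAME: differs between the two
echo -n "$tmp/copy" | md5sum
md5sum "$tmp/file" "$tmp/copy"  # hashes the CONTENT: identical for both
```

The first two sums differ even though the files are byte-for-byte identical, because only the path strings were hashed.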
| different checksum of original file and copied file |
1,421,872,935,000 |
I use OS X and have several checksum files that are generated from different external harddisks.
If the checksum files are in the same location as the files to check then I can simply run eg.:
shasum -c sums.sha1
But in my case sums.sha1 is located in ~/Desktop/sums.sha1 and the files to verify are in /Volumes/fr-ubb-1 (external drive, read only).
I understand that it's not possible to pass a location parameter to shasum.
What's the best practice to run the verification of my checksum file with files in a different location?
|
Run it from the directory containing the files to check, and give it the full path to the checksum file:
cd /Volumes/fr-ubb-1
shasum -c ~/Desktop/sums.sha1
This works with most (perhaps all) checksum verification tools, not just shasum.
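If you'd rather not change your shell's working directory, the cd can be wrapped in a subshell. Demonstrated here with throwaway paths standing in for /Volumes/fr-ubb-1 and ~/Desktop/sums.sha1, and with sha1sum (coreutils) in place of macOS's shasum, which handles -c the same way:

```shell
data=$(mktemp -d)                          # stands in for /Volumes/fr-ubb-1
echo "hello" > "$data/file.txt"
sums=$(mktemp)                             # stands in for ~/Desktop/sums.sha1
(cd "$data" && sha1sum file.txt) > "$sums" # relative names in the sum file
(cd "$data" && sha1sum -c "$sums")         # verify; outer cwd is unchanged
```

The parentheses run the cd in a child process, so your interactive shell stays where it was.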
| How to verify checksums if files to check are mounted on different location? |
1,421,872,935,000 |
What is the output of date -u +%W$(uname)|sha256sum|sed 's/\W//g' (on Arch Linux if it matters)?
How do I find that out?
|
date -u +%W
Displays the current week of the year.
uname
Displays the kernel name.
sha256sum
Generates a SHA-256 Hash Sum.
sed 's/\W//g'
Cuts out all non-word characters.
Each | (pipe) feeds the output of the command on its left into the command on its right.
Enter the line in a terminal, e.g. gnome-terminal or xterm:
date -u +%W$(uname)|sha256sum|sed 's/\W//g'
Depending on the date and the operating system installed, this will output different hashes, like this:
2aa4cb287b8a9314116f43b5e86d892d76a9589559aa69ed382e8f5dc493d955
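The pipeline can be taken apart stage by stage to see what each piece contributes:

```shell
date -u +%W$(uname)                 # e.g. "18Linux": week number + kernel name
date -u +%W$(uname) | sha256sum     # "<64 hex chars>  -"
date -u +%W$(uname) | sha256sum | sed 's/\W//g'   # non-word chars stripped: hex only
```

The final sed only removes the trailing "  -" separator, since the hex digits themselves are all word characters.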
| Shell output question |
1,421,872,935,000 |
We want to find all the .jar files along with their checksums.
find . -name "*.jar"
./lib/ant-1.8.0.jar
./lib/ant-launcher-1.8.0.jar
./lib/backport-util-concurrent-3.1.jar
./lib/classworlds-1.1-alpha-2.jar
./lib/commons-codec-1.6.jar
./lib/commons-io-2.2.jar
./lib/commons-logging-1.1.1.jar
./lib/jline-0.9.94.jar
expected output
find . -name "*.jar"
ant-1.8.0.jar 325235345 4564
ant-launcher-1.8.0.jar 3523535 5453
.
.
.
Is it possible to extend the find command with a checksum, and print all .jar files with their relevant sums?
|
You can use the exec action of find to do this:
find . -name "*.jar" -exec cksum {} \+
The exec action runs the cksum command on each result of find. The + operator specifies that multiple results from find are passed to a single execution of cksum.
Do note that the order of columns is slightly different from your question. This is governed by the output of the cksum command, which outputs the information as [checksum] [byte count] [filename].
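If you want just the base name followed by the sums, as in the expected output, one possible variant feeds each file to cksum on stdin (so no path appears in cksum's output) and strips the directory from the name itself; treat the exact formatting as illustrative:

```shell
jar_sums() {
    # $1: directory to scan (default .); prints "<name> <sum> <bytes>"
    find "${1:-.}" -name "*.jar" -exec sh -c '
        for f in "$@"; do
            printf "%s %s\n" "${f##*/}" "$(cksum < "$f")"
        done' sh {} +
}
# e.g.: jar_sums /path/to/project
```

Reading via stdin makes cksum print only the checksum and byte count, which is then prefixed with the file's base name.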
| find the relevant files with their checksum |
1,421,872,935,000 |
I am trying to set up postfix to relay all mail generated on the local machine via SMTP to a mailgun relay. I have used the mailgun relay before with success on an Ubuntu server, but I am migrating to a CentOS 7 server which I will be running in FIPS mode. The error log is below, slightly sanitized. I have a small enough network that I choose to have each machine reach out to mailgun individually (hence the loopback-only and 127.0.0.0/8 restrictions) and no open firewall port allowing SMTP into the machine.
I assume FIPS mode (and with it the disabling of MD5) is causing problems, but I don't know how to overcome it, or whether it is even possible for tls_fprint to use a supported hash such as sha256 or sha512. However, the relay=none is slightly concerning since I have relayhost set, but perhaps that is because the smtp process is failing?
Any help would be appreciated!
postconf -n:
alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
command_directory = /usr/sbin
config_directory = /etc/postfix
daemon_directory = /usr/libexec/postfix
data_directory = /var/lib/postfix
debug_peer_level = 2
debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
html_directory = no
inet_interfaces = loopback-only
inet_protocols = ipv4
local_recipient_maps =
mail_owner = postfix
mailq_path = /usr/bin/mailq.postfix
manpage_directory = /usr/share/man
mydestination =
mynetworks = 127.0.0.0/8
newaliases_path = /usr/bin/newaliases.postfix
queue_directory = /var/spool/postfix
readme_directory = /usr/share/doc/postfix-2.10.1/README_FILES
relayhost = [smtp.mailgun.org]:587
sample_directory = /usr/share/doc/postfix-2.10.1/samples
sender_canonical_classes = envelope_sender, header_sender
sender_canonical_maps = regexp:/etc/postfix/sender_canonical_maps
sendmail_path = /usr/sbin/sendmail.postfix
setgid_group = postdrop
smtp_generic_maps = hash:/etc/postfix/generic
smtp_header_checks = regexp:/etc/postfix/header_check
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_mandatory_ciphers = high
smtp_tls_note_starttls_offer = yes
smtp_tls_security_level = encrypt
smtpd_tls_security_level = encrypt
unknown_local_recipient_reject_code = 550
/var/log/maillog:
Apr 28 20:04:15 HOSTNAME postfix/pickup[85556]: SOME_ID_NUMBER: uid=0 from=<root>
Apr 28 20:04:15 HOSTNAME postfix/cleanup[85583]: SOME_ID_NUMBER: message-id=<20180429000415.SOME_ID_NUMBER@FQDN>
Apr 28 20:04:15 HOSTNAME postfix/qmgr[85557]: SOME_ID_NUMBER: from=<root@FQDN>, size=2261, nrcpt=1 (queue active)
Apr 28 20:04:16 HOSTNAME postfix/smtp[85585]: fatal: tls_fprint: error computing md5 message digest
Apr 28 20:04:17 HOSTNAME postfix/qmgr[85557]: warning: private/smtp socket: malformed response
Apr 28 20:04:17 HOSTNAME postfix/qmgr[85557]: warning: transport smtp failure -- see a previous warning/fatal/panic logfile record for the problem description
Apr 28 20:04:17 HOSTNAME postfix/master[85555]: warning: process /usr/libexec/postfix/smtp pid 85585 exit status 1
Apr 28 20:04:17 HOSTNAME postfix/master[85555]: warning: /usr/libexec/postfix/smtp: bad command startup -- throttling
Apr 28 20:04:17 HOSTNAME postfix/error[85587]: SOME_ID_NUMBER: to=<[email protected]>, relay=none, delay=1.7, delays=0.05/1.6/0/0.02, dsn=4.3.0, status=deferred (unknown mail transport error)
|
After many more hours of trying to figure it out, including turning up debugging on the smtp and tlsmgr processes in master.cf, I was able to determine that FIPS mode's disabling of MD5 was indeed the issue. Adding the following to master.cf fixed the issue:
smtp_tls_fingerprint_digest=sha256
Setting to sha1 and sha512 also worked. Note that the postfix documentation warns about setting to anything other than sha1 or md5 (md5 being the default). From the documentation:
While additional digest algorithms are often available with OpenSSL's libcrypto, only those used by libssl in SSL cipher suites are available to Postfix. For now this means just md5 or sha1.
However, for my needs sha256 seems to be working just fine.
| Postfix configuration issue with fips on centos 7; mailgun relay |
1,421,872,935,000 |
I've been having some trouble with my newly installed Ubuntu system (random freezing) and I wanted to verify that the ISO I received was not corrupted by checking the SHA-256 hashes (I know I should have done that first). I followed the instructions on the Ubuntu website but I keep getting the warnings shown below:
user@user-System-Product-Name:~/Downloads/ubuntu_isos$ sha256sum -c SHA256SUMS
sha256sum: ubuntu-16.04-desktop-amd64.iso: No such file or directory
ubuntu-16.04-desktop-amd64.iso: FAILED open or read
sha256sum: ubuntu-16.04-desktop-i386.iso: No such file or directory
ubuntu-16.04-desktop-i386.iso: FAILED open or read
sha256sum: ubuntu-16.04-server-amd64.img: No such file or directory
ubuntu-16.04-server-amd64.img: FAILED open or read
sha256sum: ubuntu-16.04-server-amd64.iso: No such file or directory
ubuntu-16.04-server-amd64.iso: FAILED open or read
sha256sum: ubuntu-16.04-server-i386.img: No such file or directory
ubuntu-16.04-server-i386.img: FAILED open or read
sha256sum: ubuntu-16.04-server-i386.iso: No such file or directory
ubuntu-16.04-server-i386.iso: FAILED open or read
ubuntu-16.04.1-desktop-amd64.iso: OK
sha256sum: ubuntu-16.04.1-desktop-i386.iso: No such file or directory
ubuntu-16.04.1-desktop-i386.iso: FAILED open or read
sha256sum: ubuntu-16.04.1-server-amd64.img: No such file or directory
ubuntu-16.04.1-server-amd64.img: FAILED open or read
sha256sum: ubuntu-16.04.1-server-amd64.iso: No such file or directory
ubuntu-16.04.1-server-amd64.iso: FAILED open or read
sha256sum: ubuntu-16.04.1-server-i386.img: No such file or directory
ubuntu-16.04.1-server-i386.img: FAILED open or read
sha256sum: ubuntu-16.04.1-server-i386.iso: No such file or directory
ubuntu-16.04.1-server-i386.iso: FAILED open or read
sha256sum: WARNING: 11 listed files could not be read
To let you know what I've done, I downloaded the Ubuntu desktop iso and the SHA256SUMS/SHA256SUMS.gpg files from http://releases.ubuntu.com/xenial/ into the same directory. I then ran the command sha256sum -c SHA256SUMS from this directory to get the output. I have downloaded 5 different images and they all give this same exact output, that does not seem correct to me. I did exactly as instructed on the Ubuntu website, but I must be doing something wrong, right?
|
The important line here is this one:
ubuntu-16.04.1-desktop-amd64.iso: OK
Your ISO is OK.
The checksum file contains sums for all the images; sha256sum warns you about the ones you haven't downloaded, because it can't verify them.
| ubuntu iso sha256 checksum |
1,421,872,935,000 |
When I try to verify the integrity of git-man-pages package I downloaded from "http://code.google.com/p/git-core/downloads/detail?name=git-manpages-1.8.4.tar.gz&can=2&q=" it fails with error.
The command I ran: md5sum -c git-manpages-1.8.4.tar.gz
Error displayed:
md5sum: git-manpages-1.8.4.tar.gz: no properly formatted MD5 checksum lines found
I also tried entering the checksum value for git-manpages that I found on the site into a file called checksum, in the following format
8c67a7bc442d6191bc17633c7f2846c71bda71cf git-manpages-1.8.4.tar.gz
and then running the
Command: md5sum -c checksum
Error displayed:
md5sum: checksum: no properly formatted MD5 checksum lines found
|
If you just want to compute the checksum of the file you downloaded you should leave the -c out. Apologies if I didn't understand your question right. For example:
$ md5sum git-manpages-1.8.4.tar.gz
e3720f56e18a5ab8ee1871ac9c72ca7c git-manpages-1.8.4.tar.gz
md5sum also expects two spaces between the checksum and the file name in files used with -c, just like in the output above. Note also that the checksum you pasted is 40 hex digits long, which is a SHA-1 hash (try sha1sum -c instead); MD5 sums are 32 hex digits, which is why md5sum rejects that line as improperly formatted.
| md5sum check fails for git-man-pages.tar.gz package |
1,421,872,935,000 |
I'd like to be able to detect when a file is considered different if the data are still the same but the ownership has changed (or the permission, access/creation time, etc...).
Is there such a tool?
|
As stated in the answer provided by Detect changes in permissions, you can determine the time of permission changes and the actual permissions by using the stat command.
So something like that should work:
stat -c "%a %Z" file | cat - file | sha1sum
| Generate file digest using data, file ownership & permission and others |
1,421,872,935,000 |
Is the time complexity of SHA-1 on Linux linear? That is, does a 2 GB file take twice as long to hash as a 1 GB file?
|
Yes. There's no sensible way to implement SHA-1, SHA2, SHA3 or any other cryptographic hash function in anything other than linear time. It's impossible to have sub-linear time since the output depends on every bit of the input, and a linear time implementation is straightforward so there's no reason for an implementation to take more than linear time.
Common hash functions are not parallelizable, but even if they were, this wouldn't change the asymptotic complexity: parallelization multiplies the running time by a constant whose lower bound is 1/p where p is the number of processors, which doesn't change the "big oh" complexity class (O(1/p · f(n)) = O(f(n))).
In theory it's possible to design a hash function that can't (or isn't known to) be computable in linear time, but I'm not aware of any advantage such a design would have.
| Linux SHA1 Time Complexity |
1,421,872,935,000 |
I tried with openssl as it is very fast:
openssl sha1 "$(basename "very_big_image.png")"
But this just hashes the contents of the actual file instead of its name.
|
If you mean the checksum of the string used as a file name, you need to pass the string to your preferred checksum tool:
$ printf 'very_big_image.png' | openssl sha1
(stdin)= a2f2cfa4c7042222ecd8d980e7b26e46ee0895e5
$ printf 'very_big_image.png' | md5sum
52846a1d6726e254756f47cfaf9e116a -
$ printf 'very_big_image.png' | sha512sum
6133cf578b5c9aa515e3670712641e17cb55c3d9f1403a07718bdcb0dd02f3f5711bf87f54e2cbee3890d842ff0acd7ac5e62cdc0cd4f1e3a1d92486c5c3fbe8 -
If you want the contents of the file, then pass the file name:
$ md5sum very_big_image.png
d3b07384d113edec49eaa6238ad5ff00 very_big_image.png
$ sha512sum very_big_image.png
0cf9180a764aba863a67b6d72f0918bc131c6772642cb2dce5a34f0a702f9470ddc2bf125c12198b1995c233c34b4afd346c54a2334c350a948a51b6e8b4e6b6 very_big_image.png
Some commonly available hashing tools are:
md5sum
sha224sum
sha256sum
sha384sum
sha512sum
| How to calculate the checksum of a file's name? |
1,421,872,935,000 |
I observe that when I use the pigz version, the generated tar file's md5sum differs between runs.
If I use GZIP=-n instead of PIGZ=-n, the generated hashes are identical. I followed this answer for "Tar produces different files each time".
$ find sourceCode -print0 | LC_ALL=C sort -z | PIGZ=-n tar \
--mode=a+rwX --owner=0 --group=0 --absolute-names --no-recursion --null -T - -zcvf file.tar.gz
$ md5sum file.tar.gz # some hash is generated
# When I apply the same operation above output for md5sum file.tar.gz is different
=> Is this a normal case? or is it possible to have same behavior for PIGZ like GZIP?
|
If you want tar to use pigz, you need to ask it to do so:
... | PIGZ=-n tar -Ipigz --mode=a+rwX --owner=0 --group=0 --absolute-names --no-recursion --null -T - -cvf file.tar.gz
With the -Ipigz option, and without -z, tar uses pigz and the PIGZ variable is taken into account. This results in tarballs with the same contents as gzip-compressed archives with GZIP=-n.
| Why does the PIGZ produce a different md5sum |
1,421,872,935,000 |
The author wrote ssh public finngerprint in the webpage.
fingerprint
If you want to send me an email, my fingerprint is:
5029 E0D0 F458 72E4 09D3 308D 1D51 378E E348 35B6
Now I try to verify it.
For public RSA (SSH) key:
wget https://www.bjornjohansen.com/pubkey.txt
ssh-keygen -l -E md5 -f pubkey.txt
2048 MD5:f4:cd:6d:0f:0c:16:20:ea:f7:bc:c0:36:b9:29:16:c3 bjornjohansen@Endor (RSA)
For OpenPGP (GnuPG) public key:
wget https://www.bjornjohansen.com/E34835B6.asc
ssh-keygen -l -E md5 -f E34835B6.asc
E34835B6.asc is not a public key file.
What I got is totally different from what the author wrote on the webpage.
I can't send him a message via Twitter.
Is my method wrong, or did the author paste an outdated fingerprint on his webpage?
|
Your understanding isn't correct. Checking the SSH public key is irrelevant. The email fingerprint is for the PGP Public key which would require you to use GnuPG (aka gpg) to validate the fingerprint.
wget https://www.bjornjohansen.com/E34835B6.asc
gpg --import E34835B6.asc
gpg --fingerprint 5029E0D0F45872E409D3308D1D51378EE34835B6
The fingerprint is from above and will pull up Bjorn's public key for you. I should also mention, Bjorn's PGP Public Key is also expired as of 2018.
| Get different md5 value when to verify ssh public key |
1,421,872,935,000 |
I'm trying to store the MD5 of a variable into another variable. Between backticks and the more modern () notation, I cannot figure out how to assign the value of a variable run through a command to another variable. Sample code:
#!/bin/bash
backup_dir=$(date +%Y-%m-%d_%H-%M-%S)
hashed=$( ${backup_dir} | md5)
Here, the hashed variable doesn't work, it takes the literal string backup_dir and hashes that. So the hash is always the same. Any thoughts?
Thanks!
|
You are expecting md5 to read the value of the backup_dir variable and return its MD5 hash sum.
The command pipeline
${backup_dir} | md5
would try to run $backup_dir as a command, piping its output to md5. I would expect a "command not found" error from this, along with the MD5 hash of the empty string (d41d8cd98f00b204e9800998ecf8427e) in $hashed.
Instead, you would need to use something like
printf '%s' "$backup_dir" | md5
to give md5 the value on its standard input stream.
You could also use echo "$backup_dir" | md5 or md5 <<<"$backup_dir", but note that this adds a newline to the end of the value of $backup_dir which would alter the hash.
If md5 is the md5 utility commonly found on BSD and BSD-like systems (e.g. macOS), then you should use
md5 -q -s "$backup_dir"
The -s option takes a string as its argument, and -q causes md5 to only print out the hash of that string and nothing else.
Summary:
#!/bin/bash
backup_dir=$(date +%Y-%m-%d_%H-%M-%S)
hashed=$(md5 -q -s "$backup_dir")
| In a bash script, how to use a variable inside a command and assign to a new variable? |
1,421,872,935,000 |
This recursive md5sum check for 40000 items of 11.8 GB takes 2 minutes:
ret=$(find "${target}"/ -name ".md5sum" -size +0 | while read aFile; do cd "${aFile%/*}"; md5sum -c ".md5sum"; done | grep -v "OK";)
Can any obvious speed improvements be made, that I have not noticed?
|
Not really.
Unless you decide to forgo the check altogether if size+timestamp matches, there is little to optimize if the checksums actually match; the files will all be identical but to verify that, you actually have to read all of it and that just takes time.
You could reduce the number of md5sum calls to a single one by building a global MD5SUMS file that contains all the files. However, since the bottleneck will be disk I/O, there will not be much difference in speed...
You can optimize a little if files are actually changed.
If file sizes change, you could record file sizes too and not have to check the md5sum, because a different size will automatically mean a changed md5sum.
Rather than whole file checksums, you could do chunk based ones so you can stop checking for differences in a given file if there already is a change early on. So instead of reading the entire file you only have to read up until the first changed chunk.
| Improve execution time for recursive md5sum check? |
1,421,872,935,000 |
What is a good way to perform a SHA256d hash (double SHA256) on the default terminal of an network isolated OpenBSD fresh install?
Here's what I'm doing:
echo test > testfile
cat testfile | openssl dgst -binary | openssl dgst
It gives me a number ending in 0xe0b6
Just wondering if there is a more concise/otherwise better way?
|
openssl dgst -binary testfile | sha256
... is shorter, but there is no built-in way of applying the SHA256 hash twice on some input in any of the base utilities on OpenBSD.
| What is a good way to perform a SHA256d hash (double SHA256) on an OpenBSD fresh install? |
1,421,872,935,000 |
An external USB flash drive containing 2 partitions is connected to my Raspberry Pi.
I want to dd an image file to this external flash drive if the first partition on the flash drive is not the same as the first one of the image file.
To achieve that, I will compare their checksum.
It's easy to compute the checksum of the flash drive's partition :
md5sum /dev/sda1
However, how to compute the checksum of the first partition stored in the image file ?
I use Debian 10 operating system.
|
You can use losetup --partscan --find --show /path/to/disk.img to set up one or more loop devices (/dev/loopN) for the image and its partitions.
Example
dd if=/dev/zero bs=1M count=100 >/tmp/100M.img
pimg() { parted /tmp/100M.img --align optimal unit MiB "$@"; }
pimg mklabel gpt
pimg mkpart primary 2 50
pimg mkpart primary 51 100%
pimg print
Model: (file)
Disk /tmp/100M.img: 100MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 2.00MiB 50.0MiB 48.0MiB primary
2 51.0MiB 100MiB 49.0MiB primary
lo=$(losetup --partscan --find --show /tmp/100M.img); echo $lo
/dev/loop0
ls ${lo}*
/dev/loop0 /dev/loop0p1 /dev/loop0p2
Rather than generate two checksums, which requires both your existing target and the source image to be read in their entirety, use cmp. This may be more efficient than generating a checksum as it will stop as soon as there is a byte mismatch. (If there is none it will read to the end, but that's no slower than your alternative of using md5sum.)
cmp /dev/loop0p1 /path/to/existing_image
If you prefer to continue using a tool such as md5sum you can use the loop device we created earlier:
md5sum /dev/loop0p1
Don't use dd - it's at least as quick to use cat or pv, and a lot easier too (no fiddly options to remember that can corrupt data if you get them wrong):
pv /dev/loop0p1 >/path/to/existing_image
| Compute checksum of a partition inside an image file |
1,421,872,935,000 |
Example I have these files
/sdcard/testfolder/file1
/sdcard/testfolder/file2
/sdcard/testfolder/file3
/sdcard/testfolder/file4.ext
I would like to create .sha256 files for each
/sdcard/testfolder/file1.sha256
/sdcard/testfolder/file2.sha256
/sdcard/testfolder/file3.sha256
/sdcard/testfolder/file4.ext.sha256
The method should work on every possible valid character in the file names and folder names
My starting point was to try and use find
NOTE : I am using find from "toybox 0.8.0-android" and cannot change it (immutable unrootable file system) however it does seem to be fully featured. My shell is "MirBSD Korn Shell" 2014 https://launchpad.net/mksh and I also cannot change it
find /sdcard/testfolder -type file -exec echo {} \;
for example returns the file list, one file per line
So one way to do this would be to replace 'echo {}' with equivalent to
sha256sum /sdcard/testfolder/file4.ext > /sdcard/testfolder/file4.ext.sha256
Maybe something like
find /sdcard/testfolder -type file -exec sha256sum {} > {}.sha256 \;
Unfortunately, find -exec does not work with that specific syntax.
Looking at an extensive list of find command example
https://sysaix.com/43-practical-examples-of-linux-find-command
It does seem that -exec command parameter1 {} parameter2 \;
is the only format, but I could be wrong?
If possible I would like to keep this as single command
Another avenue might be to pipe into another command, however I can't find how to refer to the filename from the pipe as a command line argument, maybe not possible ?
find /sdcard/testfolder -type file | sort | sha256sum $filename? > $filename?.sha256
|
What I would do:
find /sdcard/testfolder -not -name '*sha256' -type file -exec sh -c '
sha256sum "$1" > "$1.sha256"
' sh {} \;
| How to recursively create .sha256 hash files for every file in a folder? |
1,421,872,935,000 |
On a CentOS 7 system I'm trying to archive a directory and compute a checksum over it, but at the end the files are empty.
localpath=/backup
name=$(date '+%Y-%m-%d')
tar cvzf $localpath/BackUp$name.tgz $localpath/BackUp* | md5sum $localpath/BackUp$name.tgz > $localpath/checksum$name
Can you please advise on what I'm doing wrong ?
|
A pipe, |, is used for sending the output of the command on the left-hand side to the input of the command on the right-hand side. The commands on the left and right-hand side are started concurrently, and it is only the writing and reading from the left to the right that synchronises the two parts of the pipeline.
In this case, the tar command is not outputting anything that md5sum should read, and md5sum is given a filename to process, so it wouldn't read its standard input stream anyway.
What you probably want to do is to not use a pipe and instead invoke md5sum once the tar command has created your archive.
tar -vz -c -f "$localpath/BackUp$name.tgz" some files
md5sum "$localpath/BackUp$name.tgz" >"$localpath/BackUp$name.md5"
| tar archive and checksum empty |
1,421,872,935,000 |
I have 2 ~55 Gb folders (with many subfolders) that contain more than 1500 files per ~30 Mb.
I need to compare them and find out whether any files are missing, whether new files exist, or whether a file's hash differs from the original content.
How I can do it?
|
You can try something like:
cd path1
find . -type f -exec sha1sum {} \; >/var/tmp/sum.path1
cd path2
sha1sum -c /var/tmp/sum.path1 | grep -v "OK$"
(the grep removes lines ending in OK, displaying only missing files and differing hashes)
And you can change the hash algorithm to try to minimize the collision factor
| How to diff folders and get verbose information? |
1,421,872,935,000 |
There are multiple checksum commands in the GNU Core Utilities in most Linux distros for well-known hashing algorithms like SHA-2 or MD5, but I need a command for MD6 (Message Digest 6) checksums that can generate 128/256/512-bit MD6 hashes.
Some programming languages have MD6 support in their libraries, but I need a Linux command, or at least a Python library, that supports MD6, because most hash libraries don't.
|
GUI: https://github.com/tristanheaven/gtkhash
Python: https://pypi.org/project/md6/#files
PHP/JS: https://github.com/Snack-X/md6 | https://github.com/Neo-Desktop/md6 | https://github.com/Richienb/md6-hash
Rust: https://docs.rs/md6/2.0.2/md6/
Perl: https://github.com/AndyA/Digest--MD6
Swift: https://github.com/ImKcat/CatCrypto
| MD6 hash generation linux command for 128-bit, 256-bit, 512-bit MD6 hashes |
1,421,872,935,000 |
According to the manual of argon2 (Debian package), it says to pass the password from standard input. However, when I follow the instructions and attempt
echo -n "password" | argon2 salt "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"-t 4 -e
the program simply returns Error: unknown argument.
What am I missing here?
The manual says
The supplied salt (the first argument to the command) must be at least 8 octets in length, and the password is supplied on standard input.
|
The first argument, the salt value, should be the actual salt that you want to use. Therefore, your command should probably look like
echo -n "password" |
argon2 "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" -t 4 -e
if the string of a characters is your salt. Note also the space between the salt string and the -t option.
This literal command would output
$argon2i$v=19$m=4096,t=4,p=1$YWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYQ$9rVLOMSIM9ehkD8zj0aK62CZhchXpDxV/gKcBUQCnbQ
| Unable to pass argument to argon2 command |
1,421,872,935,000 |
I use this procedure
openssl req -newkey rsa:2048 -sha256 -subj "/C=IT/ST=Lazio/L=Roma/O=Blu/CN=server.server.server" -keyout ssl.key -out ssl.req -passout file:"/root/pass" ;done
#sign certificate
openssl ca -passin file:"/root/pass" -out key.crt -infiles ssl.req ;done
#removepass
for i in *key;do openssl rsa -in $i -out $i -passin file:"/root/pass" ;done
I have added -sha256, but it still generates a SHA-1 signature.
|
Solution found
on openssl.cnf
default_days = 1000 # how long to certify for
default_crl_days= 30 # how long before next CRL
default_md = default # use public key default MD
preserve = no # keep passed DN ordering
become
default_days = 1000 # how long to certify for
default_crl_days= 30 # how long before next CRL
default_md = sha256 # use public key sha256
preserve = no # keep passed DN ordering
and then work with
openssl req -nodes -sha256 -newkey rsa:2048 -subj "/C=IT/ST=Lazio/L=Roma/O=Blu/CN=server.server.server" -keyout ssl.key -out ssl.req -passout file:"/root/pass"
| Want a sha256 ssl cert, but I get sha1, why? |
1,421,872,935,000 |
I am trying to follow procedures at
How To Create Your Own Odin Flashable TAR or TAR.MD5
And the command md5sum -t your_odin_package.tar >> your_odin_package.tar does not work for me. That is, when I try to validate that I have an authenticated file, I get an error.
md5sum -t your_odin_package.tar >> your_odin_package.tar
Should I be doing this differently? I tried to use the file on the phone, and I got an error about authentication not working also.
I tried this also:
tar -H ustar -c aboot.mbn sbl1.mbn rpm.mbn tz.mbn sdi.mbn NON-HLOS.bin boot.img recovery.img system.img.ext4 cache.img.ext4 modem.bin > N900PVPUCNC5_N900PSPTCNC5_N900PVPUCNC5_HOME.tar
sansari@ubuntu:~/stock3$ mv N900PVPUCNC5_N900PSPTCNC5_N900PVPUCNC5_HOME.tar N900PVPUCNC5_N900PSPTCNC5_N900PVPUCNC5_HOME.tar.$(md5sum abc.tar | cut -d ' ' -f 1)
md5sum: abc.tar: No such file or directory
sansari@ubuntu:~/stock3$ mv N900PVPUCNC5_N900PSPTCNC5_N900PVPUCNC5_HOME.tar N900PVPUCNC5_N900PSPTCNC5_N900PVPUCNC5_HOME.tar.$(md5sum N900PVPUCNC5_N900PSPTCNC5_N900PVPUCNC5_HOME.tar | cut -d ' ' -f 1)
md5sum: N900PVPUCNC5_N900PSPTCNC5_N900PVPUCNC5_HOME.tar: No such file or directory
mv: cannot stat ‘N900PVPUCNC5_N900PSPTCNC5_N900PVPUCNC5_HOME.tar’: No such file or directory
|
By adding the md5sum to the file, the md5sum of the file content changes. More usual is to keep the md5sum in a separate file, or change the filename to include the md5sum:
mv abc.tar abc.tar.$(md5sum abc.tar | cut -d ' ' -f 1)
There are file formats that store a checksum inside the file itself (somewhere in the header, or at the end). This relies on the program that checks it knowing where the checksum is stored and excluding it when calculating the checksum.
You should IMO not use -t on a .tar file.
| How do I add the hash of a file to file itself |
1,421,872,935,000 |
I have two files in csv.gz format on an FTP location, with their checksums in .csv.gz.md5 files. I copy the files to my local system, generate checksums for them with md5sum, and compare against the provided checksum files.
Now I want to identify whether a file has an error, and which file it is.
Please help me.
|
If csv.gz.md5 was generated using md5sum csv.gz > csv.gz.md5, then you can check using md5sum -c cvs.gz.md5.
$ echo Hello World > something.abc
$ md5sum something.abc > something.abc.md5
$ md5sum -c something.abc.md5 && echo YAY || echo NAY
something.abc: OK
YAY
$ echo Garbage >> something.abc
$ md5sum -c something.abc.md5 && echo YAY || echo NAY
something.abc: FAILED
md5sum: WARNING: 1 computed checksum did NOT match
NAY
| How to identify error in a particular file while checksum verification (which file having problem while verification) in shell script |
1,421,872,935,000 |
Why do I get a different hash when I try:
md5 <<< "Hello"
md5 -s "Hello"
Is it because of a possible line break in the first example?
|
That <<< operator is actually a here-string, not a heredoc, and in bash a here-string always ends with a newline character. There is no way to disable this behavior. This newline character is what is throwing off the checksum.
| md5 String and File different |
1,421,872,935,000 |
I'm trying to synchronize literally thousands of files of various sizes and I would like to have a 1:1 copy of the files. That means that already present files should be checked for their integrity and if there's a wrong checksum, the file needs to be overwritten. A so-called delta transfer is only necessary at this point because of the partially failed transfer.
Apparently my mount is kinda unstable and it fails after 300-400GB of transfer using cp or rsync.
I did the following before this:
I mounted the storage, and did cp -r src dest, it failed after like 300GB because the mount dropped and it errored out (don't have the error anymore apparently)
I mounted the storage again and did rsync -aP src dest, it failed after like 400GB with rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1338) [sender=3.2.7] because the mount failed again. Considering the file size it probably overwrite most of the files.
I checked my kernel log and found nothing (sudo dmesg)
I found a reconnect flag for my mount, but it would not be instant.
There's a rsync flag named -c which calculates the checksums, but does it do a so-called delta transfer too or do I need to add more flags?
How could I best fix this problem at hand?
UPDATE 1
Correct me if I'm wrong but I think the issue at hand was that the storage had the owners and groups of different users and groups than in rsync. To elaborate: cp -r copied the files and changed their ownership and group ownership to the user copying, whereas rsync seems to copy the file 1:1 with the same user and group ownership... That's probably why the transfer was overwriting old files...
|
You sound like you are copying between two local filesystems. If so, rsync will not use its delta-transfer algorithm (local copies default to --whole-file) and will behave essentially like cp.
Both cp and rsync will preserve owner/group if they are run as root. Otherwise the destination files will take the owner and (default) group of the account performing the copy.
Provided you copied timestamps with your original cp you can tell rsync just to check the file size and timestamp and use that to decide whether to update missing metadata or to recopy the file in its entirety:
rsync -rt src/ dst # Timestamps only
rsync -a src/ dst # Almost all metadata
Certain filesystems may not be able to preserve metadata. Or you may not be running as root. In these situations you can try to save metadata alongside each file in its extended attributes. You'd add -X for that:
rsync -aX src/ dst # Almost all metadata
| How do you synchronize files with partially failed download using rsync? |
1,421,872,935,000 |
I have a disk which is perhaps broken. I want to write random data to the disk and later verify the md5 checksum.
I write to the disk like this:
dd if=/dev/urandom of=/dev/sda bs=4M status=progress
How to create the md5 checksum while writing to the disk at the same time? I want to see the md5 checksum of the written random data when dd finishes. Also I want to see the progress while writing to the disk.
I have read this post and I created this command:
pv /dev/urandom >(md5sum) > /dev/sdXXX
The problem is it fills up my whole RAM. I got 32GB RAM.
|
Rather than write your own solution you can use a standard scan utility.
badblocks -w -s /dev/sda
will scan the entire disk, writing patterns to each individual block then reading the block back and comparing the results. Progress is displayed during the scan. You can also specify a number of passes if the default single pass doesn't seem like enough.
| How to calculate the checksum while writing random data to a disk? |
1,421,872,935,000 |
In a folder, we have the following HADOOP binary files and their sizes (bytes):
du -sb * | grep HADOOP[a-z]
334542327 HADOOPaa
334542327 HADOOPab
334542327 HADOOPac
334542327 HADOOPad
334542327 HADOOPae
334542327 HADOOPaf
334542327 HADOOPag
334542327 HADOOPah
334542327 HADOOPai
334542327 HADOOPaj
334542327 HADOOPak
334542327 HADOOPal
334542327 HADOOPam
334542327 HADOOPan
334542327 HADOOPao
334542327 HADOOPap
334542327 HADOOPaq
334542327 HADOOPar
334542327 HADOOPas
334542327 HADOOPat
334542327 HADOOPau
334542327 HADOOPav
334542327 HADOOPaw
334542327 HADOOPax
334542327 HADOOPay
334542327 HADOOPaz
334542327 HADOOPba
334542327 HADOOPbb
932542327 HADOOPbc
334542327 HADOOPbd
334542327 HADOOPbe
434542327 HADOOPbf
934542327 HADOOPbg
108883803 HADOOPbh
With awk we can sum all the numbers into a total size in bytes. For example:
du -sb * | grep HADOOP[a-z] | awk '{ sum+=$1} END {print sum}'
now we want to do the same with md5
we try
md5sum * | grep HADOOP[a-z] | md5sum | awk '{print $1}'
2a85626137ae7d689b85e8e04e8a2523 -
but this is not so good and not so elegant, because what we want is a single combined checksum of all the files matching HADOOP[a-z] (the left-hand column is just the md5 of each individual file)
any suggestions?
|
Not sure what you're going for here... but it sounds like you want awk (or cut) after the grep to only print the sums. But then a checksum of checksums to ensure you have all the files? Is that the end result you wanted?
BTW, the order in which md5sum * lists the files depends on the shell's glob sorting, which can vary with locale, so you probably want an explicit sort in there somewhere to ensure the combined checksum is the same each time and repeatable across machines.
| calculate the total md5 of specific match files |
1,421,872,935,000 |
I am in need of creating and checking checksums between a local file, and the remote file I just pushed. If the MD5 checks, continue, else break. This needs to be in KORN shell scripting because we are using AIX machines.
here's the code I have so far:
for file in <<Directory>>; do
-- Get MD5 of local file
LOCALMD5=!chsum "$(basename "$file")"
sftp <<USER>>@<<IP>> <<EOF
PUT file <<SFTP OUTPUT FOLDER>>
REMOTEMD5= <<<COMMAND HERE>>>> <<--- Which command?
IF [[LOCALMD5!=REMOTEMD5]]; THEN
RETURNVALUE = -1
BREAK
done
print RETURNVALUE
How do I get the remote MD5 checksum?
|
Since you say "the remote file I just pushed": sftp runs over ssh, which already verifies integrity in transit, so the probability of any file difference is extremely low; as low as (in the order of) the probability that the MD5s of two different files collide.
And, the short answer is:
An sftp session doesn't allow remote execution of commands. So, if you cannot ssh to the machine, you don't have a way to remotely run md5.
So, to check a remote file you would need to read it back, which IMHO seems silly.
The only way is, then, to do:
ssh user@remote-dns-name
And once in the shell it opens, execute the command(s) you need:
$ cd path/to/file
$ csum -h MD5 file > MD5-hashsum-file
And then, copy the file created back to the local machine.
| Using korn shell to compare local and remote MD5 over sftp |
1,478,993,911,000 |
I am renting a server, running Ubuntu 16.04 at a company, let's name it company.org.
Currently, my server is configured like this:
hostname: server737263
domain name: company.org
Here's my FQDN:
user@server737263:~ $ hostname --fqdn
server737263.company.org
This is not surprising.
I am also renting a domain name, let's name it domain.org. What I would like to do would be to rename my server as server1.domain.org.
This means configuring my hostname as server1 and my domain name as domain.org.
How can I do it correctly?
Indeed, the manpage for hostname is not clear. To me at least:
HOSTNAME(1)
[...]
SET NAME
When called with one argument or with the --file option, the commands set the host name or the NIS/YP domain name. hostname uses
the sethostname(2) function, while all of the three domainname,
ypdomainname and nisdomainname use setdomainname(2). Note, that this
is effective only until the next reboot. Edit /etc/hostname for
permanent change.
[...]
THE FQDN
You cannot change the FQDN with hostname or dnsdomainname.
[...]
So it seems that editing /etc/hostname is not enough? Because if it really changed the hostname, it would have changed the FQDN. There's also a trick I read to change the hostname with the command sysctl kernel.hostname=server1, but nothing says whether this is the correct way or an ugly trick.
So:
What is the correct way to set the hostname?
What is the correct way to set the domain name?
|
Setting your hostname:
You'll want to edit /etc/hostname with your new hostname.
Then, run sudo hostname $(cat /etc/hostname).
Setting your domain, assuming you have a resolvconf binary:
In /etc/resolvconf/resolv.conf.d/head, you'll add then line domain your.domain.name (not your FQDN, just the domain name).
Then, run sudo resolvconf -u to update your /etc/resolv.conf (alternatively, just reproduce the previous change into your /etc/resolv.conf).
If you do not have resolvconf, just edit /etc/resolv.conf, adding the domain your.domain.name line.
Either way:
Finally, update your /etc/hosts file. There should be at least one line starting with one of your IP (loopback or not), your FQDN and your hostname. grepping out ipv6 addresses, your hosts file could look like this:
127.0.0.1 localhost
1.2.3.4 service.domain.com service
In response to the hostnamectl suggestions piling up in comments: it is not mandatory, nor exhaustive.
It can be used as a replacement for steps 1 & 2, IF your OS ships with systemd. The steps given above are valid regardless of whether systemd is present (PCLinuxOS, Devuan, ...).
| How to correctly set hostname and domain name? |
1,478,993,911,000 |
I can't seem to change the hostname on my CentOS 6.5 host.
I am following instructions I found on this (now defunct) page.
I set my /etc/hosts like so ...
[root@mig-dev-006 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain
192.168.32.128 ost-dev-00.domain.example ost-dev-00
192.168.32.129 ost-dev-01.domain.example ost-dev-01
... then I make my /etc/sysconfig/network file like so ...
[root@mig-dev-006 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ost-dev-00.domain.example
NTPSERVERARGS=iburst
... then I run hostname like so ...
[root@mig-dev-006 ~]# hostname ost-dev-00.domain.example
... and then I run bash and all seems well ...
[root@mig-dev-006 ~]# bash
... but when I restart my network the old hostname comes back:
[root@ost-dev-00 ~]# /etc/init.d/network restart
Shutting down interface eth0: Device state: 3 (disconnected)
[ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: Active connection state: activating
Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/6
state: activated
Connection activated
[ OK ]
[root@ost-dev-00 ~]# bash
[root@mig-dev-006 ~]#
|
to change the hostname permanently, you need to change it in two places:
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=newHostName
and (a good idea if you have any applications that need to resolve the IP of the hostname):
vi /etc/hosts
127.0.0.1 newHostName
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
and then reboot the system.
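As a sketch, the two edits can be scripted. The snippet below stages them in a scratch directory with a hypothetical name web01.example.com, so nothing on the real system is touched; on a live box you would apply the same sed and printf to the real /etc/sysconfig/network and /etc/hosts as root, then run hostname web01.example.com (or reboot).

```shell
# Stage copies of the two files in a scratch directory
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/sysconfig"
printf 'NETWORKING=yes\nHOSTNAME=localhost.localdomain\n' > "$ROOT/etc/sysconfig/network"
printf '127.0.0.1 localhost localhost.localdomain\n'      > "$ROOT/etc/hosts"

NEW=web01.example.com                              # hypothetical new name
sed -i "s/^HOSTNAME=.*/HOSTNAME=$NEW/" "$ROOT/etc/sysconfig/network"
printf '127.0.0.1 %s %s\n' "$NEW" "${NEW%%.*}" >> "$ROOT/etc/hosts"

cat "$ROOT/etc/sysconfig/network"
```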
| How to change hostname on CentOS 6.5? |
1,478,993,911,000 |
All the results of my searches end up having something to do with hostname or uname -n. I looked up the manual for both, looking for sneaky options, but no luck.
I am trying to find an equivalent of OSX's scutil --get ComputerName on Linux systems. On Mac OS X, the computer name is used as a human-readable identifier for the computer; it's shown in various management screens (e.g. on inventory management, Bonjour-based remote access, ...) and serves as the default hostname (after filtering to handle spaces etc.).
|
The closest equivalent to a human-readable (and human-chosen) name for any computer running Linux is the default hostname stored in /etc/hostname. On some (not all) Linux distributions, this name is entered during installation as the computer’s name (but with network hostname constraints, unlike macOS’s computer name). This can be namespaced, i.e. each UTS namespace can have a different hostname.
Systems running systemd distinguish three different hostnames, including a “pretty” human-readable name which is supposed to be descriptive in a similar fashion to macOS’s computer name; this can be set and retrieved using hostnamectl’s --pretty option. The other two hostnames are the static hostname, which is the default hostname described above, and the transient hostname which reflects the current network configuration.
Systemd also supports a chassis type (e.g. “tablet”) and an icon for the host; see systemd-hostnamed.service.
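On a systemd machine the pretty name can be set and read like this (the name is a placeholder; setting it needs root):

```shell
sudo hostnamectl set-hostname --pretty "Alice's Laptop"
hostnamectl --pretty     # the human-readable name
hostnamectl --static     # the /etc/hostname-style name, for comparison
```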
| How to get the computer name (not its hostname)? |
1,478,993,911,000 |
I have a domain setup to point to my LAN's external IP using dynamic DNS, because my external IP address changes frequently. However, I want to create an alias to this host, so I can access it with home. So I appended the following to my /etc/hosts:
example.com home
However, it doesn’t seem to like the domain name. If I change it to an IP:
0.0.0.0 home
then it works, but of course this defeats the purpose of dynamic DNS!
Is this possible?
|
The file /etc/hosts contains IP addresses and host names only. You cannot alias the string "home" in the way that you want by this method.
If you were running your own DNS server you'd be able to add a CNAME record to make home.example.com an alias for example.com, but otherwise you're out of luck.
The best thing you could do is use the same dynamic DNS client to update a fully-qualified name.
| Creating alias to domain name with /etc/hosts |
1,478,993,911,000 |
I accidentally typed
ssh 10.0.05
instead of
ssh 10.0.0.5
and was very surprised that it worked. I also tried 10.005 and 10.5 and those also expanded automatically to 10.0.0.5. I also tried 192.168.1 and that expanded to 192.168.0.1. All of this also worked with ping rather than ssh, so I suspect it would work for many other commands that connect to an arbitrary user-supplied host.
Why does this work? Is this behavior documented somewhere? Is this behavior part of POSIX or something? Or is it just some weird implementation? (Using Ubuntu 13.10 for what it's worth.)
|
Quoting from man 3 inet_aton:
a.b.c.d Each of the four numeric parts specifies a byte of the
address; the bytes are assigned in left-to-right order to
produce the binary address.
a.b.c Parts a and b specify the first two bytes of the binary
address. Part c is interpreted as a 16-bit value that
defines the rightmost two bytes of the binary address.
This notation is suitable for specifying (outmoded) Class B
network addresses.
a.b Part a specifies the first byte of the binary address.
Part b is interpreted as a 24-bit value that defines the
rightmost three bytes of the binary address. This notation
is suitable for specifying (outmoded) Class C network
addresses.
a The value a is interpreted as a 32-bit value that is stored
directly into the binary address without any byte
rearrangement.
In all of the above forms, components of the dotted address can be
specified in decimal, octal (with a leading 0), or hexadecimal (with
a leading 0X). Addresses in any of these forms are collectively
termed IPV4 numbers-and-dots notation. The form that uses exactly
four decimal numbers is referred to as IPv4 dotted-decimal notation
(or sometimes: IPv4 dotted-quad notation).
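The shorter forms are just different ways of packing the same 32-bit value, and the arithmetic can be checked with plain shell integer expressions (10.0.0.5 used as the example address):

```shell
# a.b form: the last field covers the low 24 bits, so 10.5 == 10.0.0.5
echo $(( (0 << 16) | (0 << 8) | 5 ))   # the "5" in "10.5"
# a.b.c form: the last field covers the low 16 bits, so 10.0.5 == 10.0.0.5
echo $(( (0 << 8) | 5 ))               # the "5" in "10.0.5"
# single-number form: the whole address as one 32-bit value
echo $(( (10 << 24) | 5 ))             # usable as e.g. "ping 167772165"
```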
For fun, try this:
$ nslookup unix.stackexchange.com
Non-authoritative answer:
Name: unix.stackexchange.com
Address: 198.252.206.140
$ echo $(( (198 << 24) | (252 << 16) | (206 << 8) | 140 ))
3338456716
$ ping 3338456716 # What? What did we ping just now?
PING stackoverflow.com (198.252.206.140): 48 data bytes
64 bytes from 198.252.206.140: icmp_seq=0 ttl=52 time=75.320 ms
64 bytes from 198.252.206.140: icmp_seq=1 ttl=52 time=76.966 ms
64 bytes from 198.252.206.140: icmp_seq=2 ttl=52 time=75.474 ms
| How is it that missing 0s are automatically added in IP addresses? (`ping 10.5` equivalent to `ping 10.0.0.5`) |
1,478,993,911,000 |
I've heard that changing the hostname in new versions of fedora is done with the hostnamectl command. In addition, I recently (and successfully) changed my hostname on Arch Linux with this method. However, when running:
[root@localhost ~]# hostnamectl set-hostname --static paragon.localdomain
[root@localhost ~]# hostnamectl set-hostname --transient paragon.localdomain
[root@localhost ~]# hostnamectl set-hostname --pretty paragon.localdomain
The changes are not preserved after a reboot (contrary to many people's claims that it does). What is wrong?
I really don't want to edit /etc/hostname manually.
I should also note that this is a completely stock fedora. I haven't even gotten around to installing my core apps yet.
|
The command to set the hostname is definitely, hostnamectl.
root ~ # hostnamectl set-hostname --static "YOUR-HOSTNAME-HERE"
Here's an additional source that describes this functionality a bit more, titled: Correctly setting the hostname - Fedora 20 on Amazon EC2.
Additionally the man page for hostnamectl:
HOSTNAMECTL(1) hostnamectl HOSTNAMECTL(1)
NAME
hostnamectl - Control the system hostname
SYNOPSIS
hostnamectl [OPTIONS...] {COMMAND}
DESCRIPTION
hostnamectl may be used to query and change the system hostname and
related settings.
This tool distinguishes three different hostnames: the high-level
"pretty" hostname which might include all kinds of special characters
(e.g. "Lennart's Laptop"), the static hostname which is used to
initialize the kernel hostname at boot (e.g. "lennarts-laptop"), and
the transient hostname which is a default received from network
configuration. If a static hostname is set, and is valid (something
other than localhost), then the transient hostname is not used.
Note that the pretty hostname has little restrictions on the characters
used, while the static and transient hostnames are limited to the
usually accepted characters of Internet domain names.
The static hostname is stored in /etc/hostname, see hostname(5) for
more information. The pretty hostname, chassis type, and icon name are
stored in /etc/machine-info, see machine-info(5).
Use systemd-firstboot(1) to initialize the system host name for mounted
(but not booted) system images.
There is a bug in Fedora 21 where SELinux prevents hostnamectl access, found here, titled: Bug 1133368 - SELinux is preventing systemd-hostnam from 'unlink' accesses on the file hostname.
This bug seems to be related. There's an issue with the SELinux contexts not being applied properly to the file /etc/hostname upon installation. This manifests in the tool hostnamectl not being able to manipulate the file /etc/hostname. That same thread offered this workaround:
$ sudo restorecon -v /etc/hostname
NOTE: Patches were applied to Anaconda (the installation tool), so this issue should go away for new users in the future.
| How to permanently change hostname in Fedora 21 |
1,478,993,911,000 |
As opposed to editing /etc/hostname, or wherever is relevant?
There must be a good reason (I hope) - in general I much prefer the "old" way, where everything was a text file. I'm not trying to be contentious - I'd really like to know, and to decide for myself if it's a good reason.
Thanks.
|
Background
hostnamectl is part of systemd, and provides a proper API for dealing with setting a server's hostnames in a standardized way.
$ rpm -qf $(type -P hostnamectl)
systemd-219-57.el7.x86_64
Previously each distro that did not use systemd, had their own methods for doing this which made for a lot of unnecessary complexity.
DESCRIPTION
hostnamectl may be used to query and change the system hostname and
related settings.
This tool distinguishes three different hostnames: the high-level
"pretty" hostname which might include all kinds of special characters
(e.g. "Lennart's Laptop"), the static hostname which is used to
initialize the kernel hostname at boot (e.g. "lennarts-laptop"), and the
transient hostname which is a default received from network
configuration. If a static hostname is set, and is valid (something
other than localhost), then the transient hostname is not used.
Note that the pretty hostname has little restrictions on the characters
used, while the static and transient hostnames are limited to the
usually accepted characters of Internet domain names.
The static hostname is stored in /etc/hostname, see hostname(5) for
more information. The pretty hostname, chassis type, and icon name are
stored in /etc/machine-info, see machine-info(5).
Use systemd-firstboot(1) to initialize the system host name for mounted
(but not booted) system images.
hostnamectl also pulls a lot of disparate data together into a single location to boot:
$ hostnamectl
Static hostname: centos7
Icon name: computer-vm
Chassis: vm
Machine ID: 1ec1e304541e429e8876ba9b8942a14a
Boot ID: 37c39a452464482da8d261f0ee46dfa5
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-693.21.1.el7.x86_64
Architecture: x86-64
The info here is coming from /etc/*release, uname -a, etc. including the hostname of the server.
What about the files?
Incidentally, everything is still in files, hostnamectl is merely simplifying how we have to interact with these files or know their every location.
As proof of this you can use strace -s 2000 hostnamectl and see what files it's pulling from:
$ strace -s 2000 hostnamectl |& grep ^open | tail -5
open("/lib64/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
open("/proc/self/stat", O_RDONLY|O_CLOEXEC) = 3
open("/etc/machine-id", O_RDONLY|O_NOCTTY|O_CLOEXEC) = 4
open("/proc/sys/kernel/random/boot_id", O_RDONLY|O_NOCTTY|O_CLOEXEC) = 4
systemd-hostnamed.service?
To the astute observer, you should notice in the above strace that not all files are present. hostnamectl is actually interacting with a service, systemd-hostnamed.service, which in fact does the "interacting" with most of the files that most admins would be familiar with, such as /etc/hostname.
Therefore when you run hostnamectl you're getting details from the service. This is an on-demand service, so you won't see it running all the time, only when hostnamectl runs. You can see it if you run a watch command, and then start running hostnamectl multiple times:
$ watch "ps -eaf|grep [h]ostname"
root 3162 1 0 10:35 ? 00:00:00 /usr/lib/systemd/systemd-hostnamed
The source for it is here: https://github.com/systemd/systemd/blob/master/src/hostname/hostnamed.c and if you look through it, you'll see the references to /etc/hostname etc.
References
systemd/src/hostname/hostnamectl.c
systemd/src/hostname/hostnamed.c
hostnamectl
systemd-hostnamed.service
| What's the point of the hostnamectl command? |
1,478,993,911,000 |
I try to find a script to decrypt (unhash) the ssh hostnames in the known_hosts file by passing a list of the hostnamses.
So, to do exactly the reverse of:
ssh-keygen -H -f known_hosts
Or also, to do the same as this if the ssh config HashKnownHosts is set to No:
ssh-keygen -R know-host.com -f known_hosts
ssh-keyscan -H know-host.com >> known_hosts
But without re-downloading the host key (caused by ssh-keyscan).
Something like:
ssh-keygen --decrypt -f known_hosts --hostnames hostnames.txt
Where hostnames.txt contains a list of hostnames.
|
Lines in the known_hosts file are not encrypted, they are hashed. You can't decrypt them, because they're not encrypted. You can't “unhash” them, because that's what a hash is all about — given the hash, it's impossible¹ to discover the original string. The only way to “unhash” is to guess the original string and verify your guess.
If you have a list of host names, you can pass each one to ssh-keygen -F and replace the matching hashed entries with the host name.
while read host comment; do
found=$(ssh-keygen -F "$host" | grep -v '^#' | sed "s/^[^ ]*/$host/")
if [ -n "$found" ]; then
ssh-keygen -R "$host"
echo "$found" >>~/.ssh/known_hosts
fi
done <hostnames.txt
¹ In a practical sense, i.e. it would take all the computers existing today longer than the present age of the universe to do it.
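For illustration, here is a sketch of the scheme itself: a hashed host field has the form |1|base64(salt)|base64(HMAC-SHA1(key=salt, data=hostname)), so "unhashing" can only mean re-computing that MAC for a guessed name. The snippet uses openssl and an ASCII salt for readability; real entries use 20 random bytes.

```shell
# HMAC-SHA1 of a hostname, keyed with the salt, base64-encoded
hash_host() {
    printf '%s' "$1" | openssl dgst -sha1 -hmac "$2" -binary | openssl base64
}

salt='0123456789abcdef0123'                    # stand-in for 20 random bytes
salt_b64=$(printf '%s' "$salt" | openssl base64)
entry="|1|$salt_b64|$(hash_host example.com "$salt")"
echo "$entry"

# Guess-and-verify: only a correct guess reproduces the stored MAC
[ "$(hash_host example.com "$salt")" = "${entry##*|}" ] && echo "guessed right"
[ "$(hash_host evil.example "$salt")" = "${entry##*|}" ] || echo "wrong guess"
```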
| How to decrypt hostnames of a crypted .ssh/known_hosts with a list of the hostnames? |
1,478,993,911,000 |
I have several VMs and right now my command-line prompt looks like -bash-3.2$; identical on every VM, because it doesn't contain the host name.
I need to always see which VM I'm on using hostname before I do any operation. How can I add the host name to the shell prompt?
ENV:
CentOS/ssh
|
Just change the value of the $PS1 environment variable:
PS1="\h$ "
where \h is replaced with the hostname. Add that to your ~/.bashrc (or the system-wide bashrc, /etc/bashrc on CentOS) to make it permanent.
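Other bash prompt escapes combine with \h; for instance, a prompt close to the usual CentOS default shows user, host, and current directory (this is a config fragment for a bashrc, not a script):

```shell
# \u = user, \h = short hostname, \H = full hostname, \W = basename of $PWD
PS1="[\u@\h \W]\$ "
```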
| How to show the host name in Linux commandline prompt |
1,478,993,911,000 |
How do I set the fully qualified hostname on CentOS 7.0?
I have seen a few posts online for example using:
$ sudo hostnamectl set-hostname nodename.domainname
However, running domainname returns nothing:
$ domainname
(none)
Also:
$ hostname
nodename.domainname
However,
$ hostname -f
hostname: Name or service not known
$ hostname -d
hostname: Name or service not known
Some debug output:
$ cat /etc/hostname
nodename.domainname
$ grep ^hosts /etc/nsswitch.conf
hosts: files dns
|
To set the hostname do use hostnamectl, but only with the hostname, like this:
hostnamectl set-hostname nodename
To set the (DNS) domainname edit /etc/hosts file and ensure that:
There is a line <machine's primary, non-loopback IP address> <hostname>.<domainname> <hostname> there
There are NO other lines with <some IP> <hostname>, and this includes lines with 127.0.0.1 and ::1 (IPv6) addresses.
Note that unless you’re using NIS, (none) is the correct output when running the domainname command.
To check if your DNS domainname is set correctly use dnsdomainname command and check output of hostname vs hostname -f (FQDN).
NIS vs. DNS domain
This issue confused me when I first came across it. It seems that the domainname command predates the popularity of the Internet. Instead of the DNS domain name, it shows or sets the system’s NIS (Network Information Service) aka YP (Yellow Pages) domain name (a group of computers which have services provided by a master NIS server). This command simply displays the name returned by the getdomainname(2) standard library function. (nisdomainname and ypdomainname are alternative names for this command.)
Display the FQDN or DNS domain name
To check the DNS (Internet) domain name, you should run the dnsdomainname command or hostname with the -d, --domain options. (Note that the dnsdomainname command can’t be used to set the DNS domain name – it’s only used to display it.)
To display the FQDN (Fully Qualified Domain Name) of the system, run hostname with the -f, --fqdn, --long options (likewise, this command can’t be used to set the domain name part).
The above commands use the system’s resolver (implemented by the gethostbyname(3) function from the standard library, as specified by POSIX) to determine the DNS domain name and the FQDN.
Name Resolution
In modern operating systems such as RHEL 7, the hosts entry in /etc/nsswitch.conf is used for resolving host names. In your CentOS 7 machine, this line is configured as (default for CentOS 7):
hosts: files dns
This means that when the resolver functions look up hostnames or IP addresses, they first check for an entry in the /etc/hosts file and next try the DNS server(s) which are listed in /etc/resolv.conf.
When running hostname -f to obtain the FQDN of a host, the resolver functions try to get the FQDN for the system’s hostname. If the host is not listed in the /etc/hosts file or by the relevant DNS server, the attempt fails and hostname reports that Name or service not known.
When hostname -d is run to obtain the domain name, the same operations are carried out, and the domain name part is determined by stripping the hostname part and the first dot from the FQDN.
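The same resolver chain can be queried directly with getent, which honours the hosts: order from nsswitch.conf; a nodename that resolves nowhere is exactly the "Unknown host" case:

```shell
getent hosts localhost                 # found in /etc/hosts
getent hosts "$(uname -n)" \
  || echo "Unknown host: no files/DNS entry for the nodename"
```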
Configure the domain name
Update the relevant DNS name server
In my case, I had already added an entry for my new CentOS 7 machine in the DNS server for my local LAN so when the FQDN wasn’t found in the /etc/hosts file when I ran hostname with the -d or -f option, the local DNS services were able to fully resolve the FQDN for my new hostname.
Use the /etc/hosts file
If the DNS server haven’t been configured, the fully qualified domain name can be specified in the /etc/hosts file. The most common way to do this is to specify the primary IP address of the system followed by its FQDN and its short hostname. E.g.,
172.22.0.9 nodename.domainname nodename
Excerpt from hostname man page
You cannot change the FQDN with hostname or dnsdomainname.
The recommended method of setting the FQDN is to make the hostname be
an alias for the fully qualified name using /etc/hosts, DNS, or
NIS. For example, if the hostname was "ursula", one might have a line
in /etc/hosts which reads:
127.0.1.1 ursula.example.com ursula
Technically: The FQDN is the name getaddrinfo(3) returns for the host
name returned by gethostname(2). The DNS domain name is the part
after the first dot.
Therefore it depends on the configuration of the resolver (usually in
/etc/host.conf) how you can change it. Usually the hosts file is
parsed before DNS or NIS, so it is most common to change the FQDN in
/etc/hosts.
| How to set the fully qualified hostname on CentOS 7.0? |
1,478,993,911,000 |
Display filter in form ip.src_host eq my.host.name.com yields no matching packets, but there is traffic to and from this host.
DNS name is resolved successfully, and filters using ip addresses like ip.src eq 123.210.123.210 work as expected.
|
The problem might be that Wireshark does not resolve IP addresses to host names and presence of host name filter does not enable this resolution automatically.
To make the host name filter work, enable DNS resolution in the settings. To do so, go to the menu "View > Name Resolution" and enable the necessary "Resolve * Addresses" options (or just enable all of them if unsure :).
| How to filter by host name in Wireshark? |
1,478,993,911,000 |
I would like a command that will resolve a hostname to an IP address, in the same way that a normal program would resolve the hostname. In other words, it has to take into account mDNS (.local) and /etc/hosts, as well as regular DNS. So that rules out host, dig, and nslookup, since all three of those tools only use regular DNS and won't resolve .local addresses.
On Linux, the getent command does exactly what I want. However, getent does not exist on OS X.
Is there a Mac OS X equivalent of getent? I'm aware that I could write one in a few lines using getaddrinfo, and that's what I'll do if I have to, but I was just wondering if there was already a standard command that could do it.
Thanks!
|
I think dscacheutil is what you're looking for. It supports caching, /etc/hosts, mDNS (for .local).
dscacheutil -q host -a name foo.local
Another option is dns-sd
dns-sd -q foo.local
More information about dscacheutil.
| Mac OS command to resolve hostnames like "getent" on Linux |
1,478,993,911,000 |
I logged in for the first time, opened terminal, and typed in ‘hostname’. It returned ‘localhost.localdomain.com’. Then I logged as the root user in terminal using the command ‘su -’, provided the password for the root user and used the command ‘hostname etest’ where etest is the hostname I’d like my machine to have. To test if I got my hostname changed correctly, I typed ‘hostname’ again at terminal and it returned etest.
However, when I restart my machine, the hostname reverts back to ‘localhost.localdomain.com’.
Here are the entire series of commands I used in terminal.
[thomasm@localhost ~]$ hostname
localhost.localdomain
[thomasm@localhost ~]$ su -
Password:
[root@localhost ~]# hostname etest
[root@localhost ~]# hostname
etest
I had run into the same problem when I set up RHEL and Ubuntu OS’s with VMPlayer.
|
On RHEL and derivatives like CentOS, you need to edit two files to change the hostname.
The system sets its hostname at bootup based on the HOSTNAME line in /etc/sysconfig/network. The nano text editor is installed by default on RHEL and its derivatives, and its usage is self-evident:
# nano /etc/sysconfig/network
You also have to change the name in the /etc/hosts file. If you do not, certain commands will suddenly start taking longer to run. They are trying to find the local host IP from the hostname, and without an entry in /etc/hosts, it has to go through the full network name lookup process before it can move on. Depending on your DNS setup, this can mean delays of a minute or so!
Having changed those two files, you can either run the hostname command to change the run-time copy of the hostname (which again, was set from /etc/sysconfig/network) or just reboot.
Ubuntu differs in that the static copy of the hostname is stored in /etc/hostname. For that matter, many aspects of network configuration are stored in different places and with different file formats on Ubuntu as compared to RHEL.
| How to change the hostname of a RHEL-based distro? |
1,478,993,911,000 |
When I install sendmail from the debian repos, I get the following output:
Disabling HOST statistics file(/var/lib/sendmail/host_status).
Creating /etc/mail/sendmail.cf...
Creating /etc/mail/submit.cf...
Informational: confCR_FILE file empty: /etc/mail/relay-domains
Informational: confCT_FILE file empty: /etc/mail/trusted-users
Updating /etc/mail/access...
Updating /etc/mail/aliases...
WARNING: local host name (ixtmixilix) is not qualified; see cf/README: WHO AM I?
Can someone please tell me what this means, what I need to do to qualify my hostname?
|
It's referring to this page from the readme, which tells you how to specify your hostname. It's warning you that your hostname won't work outside your local network; sendmail attaches your hostname as the sender of the message, but it's going to be useless on the other end because people outside your local network can't find the machine ixtmixilix. You should specify a hostname that can be resolved from anywhere, like ixtmixilix.example.com
| What is sendmail referring to here? |
1,478,993,911,000 |
It seems that wildcards are not supported in the /etc/hosts file.
What is the best solution for me to resolve all *.local domains to localhost?
|
You'd really need to run your own DNS server and use wildcards. Exactly how you'd do that would depend on the DNS package you ran.
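For example, with dnsmasq (a common lightweight choice) a single address line gives the wildcard behaviour; this config fragment assumes you then point /etc/resolv.conf at 127.0.0.1 so the local dnsmasq is consulted first:

```shell
# /etc/dnsmasq.conf -- resolve every *.local name to the loopback address
address=/local/127.0.0.1
```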
| Wildcard in /etc/hosts file |
1,478,993,911,000 |
Am am currently visiting TU Wien and today I connected my Debian Linux laptop to their eduroam wlan using wpa_supplicant and the credentials of my home institute - as always when I am visiting another scientific institution.
When I opened a terminal I noticed that my command promt was showing a different host name, and in fact, excecuting hostname gave me e244-082.eduroam.tuwien.ac.at instead of the usual host name of my machine x301.
I am very puzzled by this. How on earth can it be possible that connecting to a wlan changes my host name without my consent?
|
Some DHCP servers send out host names. Clients can accept or ignore such offers.
Have a look at your local /etc/dhcp/dhclient.conf file to inspect your current configuration. There is a list of request entries, one of which will probably read host-name. For more information check out the man page of dhclient.conf.
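If you want to keep your own name regardless of what the DHCP server offers, dhclient can also be told to override the option; a one-line config sketch for the laptop in question (x301 assumed as the desired name):

```shell
# /etc/dhcp/dhclient.conf -- ignore the server-supplied host name
supersede host-name "x301";
```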
| Host name changed remotely by wifi? |
1,478,993,911,000 |
Heyo! I'm currently working on a non-lfs system from scratch with busybox as the star. Now, my login says:
(none) login:
Hence, my hostname is broken. hostname brings me (none) too.
The guide I was following told me to throw the hostname to /etc/HOSTNAME. I've also tried /etc/hostname. No matter what I do, hostname returns (none) - unless I run hostname <thename> or hostname -F /etc/hostname. Now obviously, I don't want this to be done every time somebody freshly installed the distro -- so what is the real default file, if not /etc/hostname?
Thanks in advance!
|
The hostname commands in common toolsets, including BusyBox, do not fall back to files when querying the hostname.
They report solely what the kernel returns to them as the hostname from a system call, which the kernel initializes to a string such as "(none)", changeable by reconfiguring and rebuilding the kernel.
(In systemd terminology this is the dynamic hostname, a.k.a. transient hostname; the one that is actually reported by Linux, the kernel.)
There is no "default file".
There's usually a single-shot service that runs at system startup, fairly early on, that goes looking in these various files, pulls out the hostname, and initializes the kernel hostname with it.
(In systemd terminology this configuration string is the static hostname.)
For example:
In my toolset I provide an "early" hostname service that runs the toolset's set-dynamic-hostname command after local filesystem mounts and before user login services. The work is divided into stuff that is done (only) when one makes a configuration change, and stuff that is done at (every) system bootstrap:
The external configuration import mechanism reads /etc/hostname and /etc/HOSTNAME, amongst other sources (since different operating systems configure this in different ways), and makes an amalgamated rc.conf.
The external configuration import mechanism uses the amalgamated rc.conf to configure this service's hostname environment variable.
When the service runs, set-dynamic-hostname doesn't need to care about all of the configuration source possibilities and simply takes the environment variable, from the environment configured for the service, and sets the dynamic hostname from it.
In systemd this is an initialization action that is hardwired into the code of systemd itself, that runs before service management is even started up. The systemd program itself goes and reads /etc/hostname (and also /proc/cmdline, but not /etc/HOSTNAME nor /etc/default/hostname nor /etc/sysconfig/network) and passes that to the kernel.
In Void Linux there is a startup shell script that reads the static hostname from (only) /etc/hostname, with a fallback to the shell variable read from rc.conf, and sets the dynamic hostname from its value.
If you are building a system "from scratch", then you'll have to make a service that does the equivalent.
The BusyBox and ToyBox tool for setting the hostname from a file is hostname -F, so you'll have to make a service that runs that command against /etc/hostname or some such file.
BusyBox comes with runit's service management toolset, and a simple runit service would be something along the lines of:
#!/bin/sh -e
exec 2>&1
exec hostname -F /etc/hostname
Further reading
Lennart Poettering et al. (2016). hostnamectl. systemd manual pages. Freedesktop.org.
Jonathan de Boyne Pollard (2017). "set-dynamic-hostname". User commands manual. nosh toolset. Softwares.
Jonathan de Boyne Pollard (2017). "rc.conf amalgamation". nosh Guide. Softwares.
Jonathan de Boyne Pollard (2015). "external formats". nosh Guide. Softwares.
Rob Landley. hostname. Toybox command list. landley.net.
https://unix.stackexchange.com/a/12832/5132
| What's the default file for `hostname`? |
1,478,993,911,000 |
I am running a RHEL 5.7 and the hostname command gives me the correct hostname.
But hostname -s and hostname -f return: Unknown host. Why?
|
(copied from one of my answers on SF)
The hostname command returns results from DNS and /etc/hosts.
hostname is equivalent to uname -n and is the actual "hostname" or "nodename" of the box.
All the other hostname arguments use this nodename to look up info.
So before going any further, I should explain the /etc/hosts file format.
The first field is fairly obvious, its the IP address all the hostnames on the line should resolve to. The second field is the primary hostname for that IP. The remaining fields are aliases.
So if you run hostname -f it will first try to resolve the IP for your nodename. Depending on how you have the hosts: entry configured in /etc/nsswitch.conf this method will vary.
If you have it configured to use dns, it will use the search domains configured in /etc/resolv.conf until it gets an IP back from DNS.
If you have it configured to use files it will look in /etc/hosts to find a line where either the primary hostname or the alias name is your current nodename (uname -n), and then return the IP address in that line.
Once it has the IP it will then try a reverse lookup on that IP. Again it will use DNS for this and your hosts file based on your nsswitch.conf. In the case of using your hosts file, it will return the primary entry (which is the first field after the IP in the file).
hostname -a will only work with the hosts file since doing a reverse lookup in DNS only gives you 1 result. With the hosts file it return the alises in the matching line (which is everything after the first entry, the primary hostname).
So in short, the likely reason for your issue is that you have no entry in /etc/hosts that contains your hostname (uname -n).
Examples
If your nodename is 'foobar', and you have an entry in /etc/hosts such as this:
127.0.0.1 foobar.example.com foobar localhost.localdomain localhost
Then you will get the following command results:
# hostname
foobar
# uname -n
foobar
# hostname -f
foobar.example.com
# hostname -a
foobar localhost.localdomain localhost
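The /etc/hosts lookup described above is easy to mimic with awk: print the primary name (field 2) from the first line that mentions the nodename anywhere after the IP. A self-contained sketch using the example line from this answer, against a temporary file rather than the real /etc/hosts:

```shell
hostsfile=$(mktemp)
echo '127.0.0.1 foobar.example.com foobar localhost.localdomain localhost' > "$hostsfile"

# Equivalent of `hostname -f` for nodename "foobar":
# scan fields 2..NF for the nodename, then emit the primary name
awk -v n=foobar '{ for (i = 2; i <= NF; i++) if ($i == n) { print $2; exit } }' "$hostsfile"
```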
| linux hostname -f command is not working on RHEL |
1,478,993,911,000 |
How can you determine the hostname associated with an IP on the network? (without configuring a reverse DNS)
This was something that I thought was impossible. However I've been using Fing on my mobile. It is capable of finding every device on my network (presumably using an arp-scan) and listing them with a hostname.
For example, this app is capable of finding freshly installed Debian Linux devices plugged into a home router, with no apparent reverse DNS.
As far as I know neither ping, nor Neighbor Discovery, nor arp include a hostname. So how can fing be getting this for a freshly installed Linux PC? What other protocol on a Linux machine would give out the machine's configured hostname?
|
The zeroconf protocol suite (Wikipedia) could provide this information.
The best known implementations are AllJoyn (Windows and others), Bonjour (Apple), Avahi (UNIX/Linux).
Example showing a list of everything on a LAN (in this case not very much):
avahi-browse --all --terminate
+ ens18 IPv6 Canon MG6650 _privet._tcp local
+ ens18 IPv4 Canon MG6650 _privet._tcp local
+ ens18 IPv6 Canon MG6650 Internet Printer local
+ ens18 IPv4 Canon MG6650 Internet Printer local
+ ens18 IPv6 Canon MG6650 UNIX Printer local
+ ens18 IPv4 Canon MG6650 UNIX Printer local
+ ens18 IPv6 Canon MG6650 _scanner._tcp local
+ ens18 IPv4 Canon MG6650 _scanner._tcp local
+ ens18 IPv6 Canon MG6650 _canon-bjnp1._tcp local
+ ens18 IPv4 Canon MG6650 _canon-bjnp1._tcp local
+ ens18 IPv6 Canon MG6650 Web Site local
+ ens18 IPv4 Canon MG6650 Web Site local
+ ens18 IPv6 SERVER _device-info._tcp local
+ ens18 IPv4 SERVER _device-info._tcp local
+ ens18 IPv6 SERVER Microsoft Windows Network local
+ ens18 IPv4 SERVER Microsoft Windows Network local
More specifically, you can use avahi-resolve-address to resolve an address to a name.
Example
avahi-resolve-address 192.168.1.254
192.168.1.254 router.roaima...
| How can you determine the hostname associated with an IP on the network? |
1,478,993,911,000 |
I need to add a check if the hostname is already present in the known_hosts file.
Normally I would do something like that:
ssh-keygen -H -F hostname
However, that does not seem to work for me in this particular case. I connect to the host using port 2102, like that:
ssh user@myhost -p 2102
I was asked to add the hostname to the known_hosts file, I say yes. After that I run ssh-keygen -H -F myhost but receive empty result.
To make the matter worse, the known_hosts is hashed.
That works perfectly with port 22, so if I login to ssh user@myotherhost, save the known host and run ssh-keygen -H -F myotherhost I receive the exact line from the file.
So, how can I adjust the command to work with port 2102?
|
You can use this format: [hostname]:2102, as it is stored in the known_hosts file (note, you need to use the square brackets!):
ssh-keygen -H -F "[hostname]:2102"
Proof of concept (transcript of my minimal test case):
$ echo "[hostname]:2121 ssh-rsa AAA...==" > known_hosts
$ ssh-keygen -Hf known_hosts
known_hosts updated.
Original contents retained as known_hosts.old
WARNING: known_hosts.old contains unhashed entries
Delete this file to ensure privacy of hostnames
$ ssh-keygen -H -F "[hostname]:2121" -f known_hosts
|1|R21497dX9jN052A92GSoVFbuTPM=|lRtIr6O564EaFG0SsIulNAWpcrM= ssh-rsa AAA...==
You might need to use IP address instead of hostname, but it should generally work.
| Check presence of a hostname under custom port in known_hosts |
1,478,993,911,000 |
I'm trying to figure out more ways to see if a given host is up, solely using shell commands (primarily bash). Ideally, it would be able to work with both hostnames and IP addresses. Right now the only native way I know of is ping, perhaps integrated into a script as described here. Any other ideas?
|
Ping is great to get a quick response about whether the host is connected to the network, but it often won't tell you whether the host is alive or not, or whether it's still operating as expected. This is because ping responses are usually handled by the kernel, so even if every application on the system has crashed (e.g. due to a disk failure or running out of memory), you'll often still get ping responses and may assume the machine is operating normally when the situation is quite the opposite.
Checking services
Usually you don't really care whether a host is still online or not, what you really care about is whether the machine is still performing some task. So if you can check the task directly then you'll know the host is both up and that the task is still running.
For a remote host that runs a web server for example, you can do something like this:
# Add the -f option to curl if server errors like HTTP 404 should fail too
if curl -I "http://$TARGET"; then
echo "$TARGET alive and web site is up"
else
echo "$TARGET offline or web server problem"
fi
If it runs SSH and you have keys set up for passwordless login, then you have a few more options, for example:
if ssh "$TARGET" true; then
echo "$TARGET alive and accessible via SSH"
else
echo "$TARGET offline or not accepting SSH logins"
fi
This works by SSH'ing into the host and running the true command and then closing the connection. The ssh command will only return success if that command could be run successfully.
Remote tests via SSH
You can extend this to check for specific processes, such as ensuring that mysqld is running on the machine:
# The [m] trick stops grep from matching its own entry in the ps output
if ssh "$TARGET" 'ps aux | grep -q "[m]ysqld"'; then
echo "$TARGET alive and running MySQL"
else
echo "$TARGET offline or MySQL crashed"
fi
Of course in this case you'd be better off running something like monit on the target to ensure the service is kept running, but it's useful in scripts where you only want to perform some task on machine A as long as machine B is ready for it.
This could be something like checking that the target machine has a certain filesystem mounted before performing an rsync to it, so that you don't accidentally fill up its main disk if a secondary filesystem didn't mount for some reason. For example this will make sure that /mnt/raid is mounted on the target machine before continuing.
# Spaces around the path avoid matching e.g. /mnt/raid2
if ssh "$TARGET" 'mount | grep -q " /mnt/raid "'; then
echo "$TARGET alive and filesystem ready to receive data"
else
echo "$TARGET offline or filesystem not mounted"
fi
Services with no client
Sometimes there is no easy way to connect to the service and you just want to see whether it accepts incoming TCP connections, but when you telnet to the target on the port in question it just sits there and doesn't disconnect you, which means doing that in a script would cause it to hang.
While not quite so clean, you can still do this with the help of the timeout and netcat programs. For example this checks to see whether the machine accepts SMB/CIFS connections on TCP port 445, so you can see whether it is running Windows file sharing even if you don't have a password to log in, or the CIFS client tools aren't installed:
# Wait 1 second to connect (-w 1) and if the total time (DNS lookups + connect
# time) reaches 5 seconds, assume the connection was successful and the remote
# host is waiting for us to send data. Connecting on TCP port 445.
if echo 'x' | timeout --preserve-status 5 nc -w 1 "$TARGET" 445; then
echo "$TARGET alive and CIFS service available"
else
echo "$TARGET offline or CIFS unavailable"
fi
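When netcat isn't installed, bash itself can attempt the TCP connection through its /dev/tcp pseudo-device (a bash-specific sketch, not POSIX sh; the target name is a placeholder):

```shell
# Pure-bash TCP probe of port 445, capped at 5 seconds via timeout.
# The inner bash receives $TARGET as $0 and opens fd 3 on the socket.
TARGET=${TARGET:-127.0.0.1}   # placeholder host for this sketch
if timeout 5 bash -c 'exec 3<>"/dev/tcp/$0/445"' "$TARGET" 2>/dev/null; then
    echo "$TARGET alive and CIFS service available"
else
    echo "$TARGET offline or CIFS unavailable"
fi
```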
| Shell command/script to see if a host is alive? |
1,478,993,911,000 |
In the few years I've been using Linux as my main system, specifically Fedora, I've always seen my hostname set to just "localhost", with the exception of when I connect to some networks and it becomes my IP. Today I experienced the following behavior which I'm having trouble understanding though.
I set up an Ubuntu installation on another partition of my laptop, setting a computer name / hostname during the Ubuntu install. When I rebooted back into Fedora though, Fedora had updated my hostname to the name I set in the Ubuntu install.
I always thought the hostname was configured and stored on the partition of the distro installation, and indeed the contents of /etc/hostname on Fedora still read "localhost.localdomain", but running the hostname command shows the new hostname. Both installs share an efi boot partition, but are otherwise discrete. I'm wondering from where and why the Fedora install is reading the new hostname?
|
The hostname program performs a uname syscall, as can be seen from running:
strace hostname
...
uname({sysname="Linux", nodename="my.hostname.com", ...}) = 0
...
From the uname syscall man page, it says the syscall retrieves the following struct from the kernel:
struct utsname {
    char sysname[];    /* Operating system name (e.g., "Linux") */
    char nodename[];   /* Name within "some implementation-defined
                          network" */
    char release[];    /* Operating system release (e.g., "2.6.28") */
    char version[];    /* Operating system version */
    char machine[];    /* Hardware identifier */
#ifdef _GNU_SOURCE
    char domainname[]; /* NIS or YP domain name */
#endif
};
So the domain name comes from the NIS system, if we believe the comment. More than likely, there may be a NIS/YP service on your network that is reporting back the name that was set by the Ubuntu OS.
| What determines the Linux hostname? |
1,478,993,911,000 |
We were trying to install our software on an Ubuntu machine. To do so, we needed root privileges. Basically, all we needed to do was run a runnable jar like: sudo java -jar runnableJar.jar.
All such commands would return: Unable to resolve host xxxxx.
The /etc/hosts file had the incorrect hostname listed against the loopback interface which was causing this error. All commands which did not require sudo ran well.
I have been reading up on the loopback interface and my understanding is that it sets up localhost and is a virtual network interface. However, why does sudo need it at all?
|
Since the sudoers file permits specifying hostnames in its rules, sudo needs to know what the name of your Ubuntu machine is.
Because of this, sudo collects a list of all interfaces on your Ubuntu machine (loopback and "real"). See the relevant section of the sudo source code, interfaces.c, at the link below.
http://www.sudo.ws/repos/sudo/file/d8150a3fd577/interfaces.c
| Why does sudo need the loopback interface? |
1,478,993,911,000 |
During Debian installation, I set my hostname to the wrong value, and now I would like to correct that.
|
The hostname is stored in three different files:
/etc/hostname Used as the hostname
/etc/hosts Helps resolving the hostname to an IP address
/etc/mailname Determines the hostname the mail server identifies itself
You might want to have a deeper look with grep -ir hostname /etc
Restarting affected services might be a good idea as well.
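A read-only check that prints all three files at once, so stale entries stand out (a sketch; adjust the list for your system, since /etc/mailname usually only exists when a mail server is installed):

```shell
# Print each hostname-related file that exists
for f in /etc/hostname /etc/hosts /etc/mailname; do
    if [ -e "$f" ]; then
        printf '== %s ==\n' "$f"
        cat "$f"
    fi
done
```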
| How to change a hostname |
1,478,993,911,000 |
More of a Amazon Web Services EC2 question, hopefully not too off topic. I have a vanilla Ubuntu instance with them, and powered it off. On reboot couldn't ssh to the FQDN because the external IP address had changed.
Does it cost extra for a "static", or at least semi-permanent, IP address? I would settle for semi-permanent. Elastic sounds about right, in marketing speak.
Or, is this a security measure? I did select to have an external IP address, but it is a free account. The documentation says:
An Elastic IP address is a static IPv4 address designed for dynamic
cloud computing. An Elastic IP address is associated with your AWS
account. With an Elastic IP address, you can mask the failure of an
instance or software by rapidly remapping the address to another
instance in your account.
An Elastic IP address is a public IPv4 address, which is reachable
from the Internet.
The external IP address, the one I used for ssh, is different from what I see when running ifconfig, so they're using some form of NAT, as explained here:
Important
You can't manually disassociate the public IP address from your
instance after launch. Instead, it's automatically released in certain
cases, after which you cannot reuse it.
Even though I selected an external IP address, reboot appears (?) to be a circumstance where the IP address is released back into the pool. Please clarify that understanding.
|
The main difference between the two is that:
You will lose your public IP when you stop and start the instance, while the EIP remains linked to the instance even after the stop/start operation (or until you explicitly detach it from the instance).
Concerning costs, there's no recurring fee for EIP usage while you keep it attached to a running instance. Otherwise you will have to pay for the resource being allocated but not used.
| How is an Elastic IP address different from a static IP address? |
1,478,993,911,000 |
While writing a script, I wanted to reference a machine by the computer name that I gave it (e.g. "selenium-rc"). I could not ping it using "selenium-rc", so I tried the following commands to see if the name was recognized.
> traceroute 192.168.235.41
traceroute to 192.168.235.41 (192.168.235.41), 64 hops max, 52 byte packets
1 selenium-rc (192.168.235.41) 0.545 ms 0.241 ms 0.124 ms
Ok, traceroute "found" the name. How? Next ...
> traceroute selenium-rc
traceroute: unknown host selenium-rc
Hmm ... the lookup mechanism here must be different because the host is unknown. I'm assuming this is using a system name resolution process whereas the first example was using a process specific to traceroute. Correct?
Then when I came back a bit later ...
> traceroute 192.168.235.41
traceroute to 192.168.235.41 (192.168.235.41), 64 hops max, 52 byte packets
1 minint-q4e8i52.mycorp.net (192.168.235.41) 0.509 ms 0.206 ms 0.136 ms
Ok, different result. The "selenium-rc" name did not change on the machine itself, but the traceroute name resolution process must include some sort of priority and now gives a presumably more authoritative result assigned by another system/service on the network. (Unfortunately, I'm assuming it's a dynamic name that I do not control, and thus it would not be useful in a script.)
Can someone explain the results?
|
Generally, in Linux, and Unix, traceroute and ping would both use a call to gethostbyname() to lookup the name of a system. gethostbyname() in turn uses the system configuration files to determine the order in which to query the naming databases, ie: /etc/hosts, and DNS.
In Linux, the default action is (or maybe used to be) to query DNS first, and then /etc/hosts. This can be changed or updated by setting the desired order in /etc/host.conf.
To search /etc/hosts before DNS, set the following order in /etc/host.conf:
order hosts,bind
In Solaris, this same order is controlled via the /etc/nsswitch.conf file, in the entry for the hosts database.
hosts: files dns
Sets the search order to look in /etc/hosts before searching DNS.
Traceroute and ping would both use these methods to search all the configured naming databases. The host and nslookup commands both use only DNS, so they won't necessarily duplicate the seemingly inconsistent results you're seeing.
Solaris has a lookup tool, getent, which can be used to identify hosts or addresses in the same way that traceroute and ping do - by following the configured set of naming databases to search.
getent hosts <hostname>
would search through whatever databases are listed for hosts, in /etc/nsswitch.conf.
So, in your case, to achieve consistent results, add the following to /etc/hosts:
192.168.235.41 selenium-rc
And, make sure /etc/host.conf has:
order hosts,bind
Or, make sure that /etc/nsswitch.conf has:
hosts: files dns
Once that's done, you should see more consistent results with both ping, and traceroute, as well as other commands, like ssh, telnet, curl, wget, etc.
| How does traceroute resolve names? |
1,478,993,911,000 |
I know that pwd gives the current working directory, hostname gives the current host and whoami gives the current user. Is there a single unix command that will give me the output of
whoami@hostname:pwd
so that I can quickly paste the output into an scp command?
|
Not a single command as far as I know, but this does what you need:
echo "$(whoami)@$(hostname):$PWD"
You could make that into an alias by adding this line to your shell's rc file (~/.bashrc, or ~/.zshrc or whatever you use):
alias foo='echo "$(whoami)@$(hostname):$PWD"'
| A command that gives username@hostname:pwd |
1,478,993,911,000 |
I have a Centos 5.8 server. How can I add my server IP address 192.168.20.254 so that it reflects when I run the command hostname -i
At the moment the command only shows the loopback IP 127.0.0.1 and the software I am trying to install complains that the server IP is not reflected here.
|
Edit file /etc/hosts and add line:
192.168.20.254 this.is.my.host
Of course, instead of this.is.my.host enter the proper hostname. You can check it by running hostname without any parameters.
| How to add an IP to hostname file |
1,478,993,911,000 |
What is the difference between uname -n and hostname? Are there any real differences in what they return? Are there any differences in availability on different OSes? Is one of them included in POSIX and the other not?
|
There is no difference. hostname and uname -n output the same information. They both obtain it from the uname() system call.
One difference is that the hostname command can be used to set the hostname as well as getting it. uname cannot do that. (Normally this is done only once, early in the boot process!)
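Since both read the kernel's nodename, a quick way to confirm they agree on a given machine:

```shell
# uname -n and hostname both come from the uname() syscall's nodename,
# so on any one machine they must print the same string.
a=$(uname -n)
b=$(hostname)
if [ "$a" = "$b" ]; then
    echo "match: $a"
else
    echo "mismatch: uname=$a hostname=$b"
fi
```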
| uname -n vs hostname |
1,478,993,911,000 |
In another question, I found that Puppet was generating certificates for my machine's FQDN but not the simple host name. In that example, kungfumaster was the hostname and was the value retrieved by running hostname. Puppet was generating certificates which specified the FQDN kungfumaster.domain.com.
How did Puppet determine that this was my FQDN? I have tried all of the following and not seen anything matching *.domain.com:
$ hostname -a && hostname -d && hostname --domain && hostname -f && \
hostname --fqdn && hostname -A && hostname --long
kungfumaster
kungfumaster
kungfumaster
kungfumaster
How can I get kungfumaster.domain.com from Bash? I've noticed that domain.com does in fact exist in /etc/resolv.conf, but I haven't been able to find it anywhere else.
I basically want to get the FQDN of the current machine as a string. The other solutions here on unix.se haven't worked for me. (ie: dnsdomainname, domainname, etc.)
|
It appears that under-the-hood, Puppet uses Facter to evaluate the domain names:
$ facter domain
domain.com
$ facter hostname
kungfumaster
$ facter fqdn
kungfumaster.domain.com
The answer is in the relevant Facter source code.
It does the following in order and uses the first one that appears to contain a domain name:
hostname -f
dnsdomainname
parsing resolv.conf for a "domain" or "search" entry
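That fallback chain can be approximated in plain shell (a rough sketch, not Facter's actual implementation; get_domain is a made-up helper name):

```shell
# Approximate Facter's domain lookup order:
# 1) domain part of `hostname -f`  2) dnsdomainname  3) resolv.conf entry
get_domain() {
    resolv=${1:-/etc/resolv.conf}
    # cut -s prints nothing when there is no dot, i.e. no domain part
    d=$(hostname -f 2>/dev/null | cut -s -d. -f2-)
    [ -n "$d" ] || d=$(dnsdomainname 2>/dev/null)
    [ -n "$d" ] || d=$(awk '$1 == "domain" || $1 == "search" { print $2; exit }' "$resolv" 2>/dev/null)
    printf '%s\n' "$d"
}
get_domain
```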
| Determine FQDN when hostname doesn't give it? |
1,478,993,911,000 |
My server is CentOS 7.1. After reboot the hostname is overwritten by the transient hostname (mail) and I can't find a way to avoid that.
Maybe AutoDNS and the MX record mail causes that?
/etc/hostname contains the correct value
hostnamectl --transient set-hostname my.desired.name is working but only until next reboot
So, after reboot:
hostnamectl status shows the correct static hostname but the wrong transient hostname (mail).
hostname -s or hostname -f shows the wrong hostname.
The file /etc/sysconfig/network is overwritten “by anaconda” and has the line HOSTNAME="mail". I tried to edit this file to configure the correct name but it's overwritten after reboot.
How can I prevent the transient hostname being set to mail after restarting?
EDIT:
I already tried to add DHCP_HOSTNAME="my.desired.name" to my /etc/sysconfig/network-scripts/ifcfg-e..... but with no success (line was removed after reboot).
And I tried adding execution of hostnamectl set-hostname "" --transient (which sets the transient hostname to the value of the static hostname) at reboot, which failed both via an activated /etc/rc.local and as a service with chkconfig on (with # chkconfig: - 11 91 so that it should run after all other services).
Any further suggestions are welcome.
|
Finally I got it.
Our hosting provider (Host Europe) has an option in the control panel for each server (virtual root server). On the page "Hostname / RDNS" there is an input field "Hostname:". I changed it to the correct value and now it works as expected.
| Avoid overwriting of hostname by transient hostname at reboot |
1,478,993,911,000 |
I have a firstboot.service that, from a stock OS image creates a unique hostname based on the MAC of the primary ethernet adapter. It runs as expected during boot but the hostname that gets registered with DHCP is still the default hostname as set from the kernel. So after the device boots, I can ping it at defaultname.mynet.lan but when I login and call hostname it displays foo-XXXX as expected.
As you can see below, the service is registered to run before network.target. As you may guess I'm using systemd-networkd and systemd-resolved for networking.
Do I have to do something else to propagate the hostname to running processes?
Can I set the hostname earlier in the boot process, if so what target should I use?
firstboot.service
[Unit]
ConditionPathExists=|!/etc/hostname
Before=network.target
After=local-fs.target
After=sys-subsystem-net-devices-eth0.device
[Service]
Type=oneshot
ExecStart=/bin/bash -c "/usr/local/sbin/firstboot.sh"
RemainAfterExit=yes
[Install]
WantedBy=network.target
firstboot.sh
HOST_PREFIX=${HOST_PREFIX:-"foo"}
NET_DEVICE=${NET_DEVICE:="eth0"}
LAST_MAC4=$(sed -rn "s/^.*([0-9A-F:]{5})$/\1/gi;s/://p" /sys/class/net/${NET_DEVICE}/address)
NEW_HOSTNAME=${HOST_PREFIX}-${LAST_MAC4:-0000}
echo $NEW_HOSTNAME > /etc/hostname
/bin/hostname -F /etc/hostname
|
Nothing ensures that your firstboot.service runs before systemd-networkd is started. You have to use
Wants=network-pre.target
Before=network-pre.target
instead of Before=network.target to achieve that. As man systemd.special explains:
network-pre.target:
This passive target unit may be pulled in by services that want to run before any network is set up, for example for the purpose of setting up a firewall. All network management software orders itself after this target, but does not pull it in.
You'll also need DefaultDependencies=false to avoid the implicit dependency on basic.target (see man systemd.service).
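Putting the pieces together, the unit's [Unit] section would then look roughly like this (a sketch assembled from the options above; the [Service] and [Install] sections stay as in the question):

```ini
[Unit]
ConditionPathExists=|!/etc/hostname
DefaultDependencies=false
After=local-fs.target
After=sys-subsystem-net-devices-eth0.device
Wants=network-pre.target
Before=network-pre.target
```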
| Set hostname on first boot before network.service |
1,478,993,911,000 |
I have several log files that contain a bunch of ip addresses. I would love to be able to pipe the data through a program that would match and resolve ip addresses.
I.E.
cat /var/log/somelogfile | host
which would turn a line like
10:45 accessed by 10.13.13.10
into
10:45 accessed by myhostname.intranet
My thought is there might be a way to do this with a combination of sed and host, but I have no idea how to do so. I know that I could write a simple script that would do it, but I would rather be able to use built in tools if possible. Any suggestions?
|
Here's a quick and dirty solution to this in Python. It does caching (including negative caching), but no threading and isn't the fastest thing you've seen. If you save it as something like rdns, you can call it like this:
zcat /var/log/some-file.gz | rdns
# ... or ...
rdns /var/log/some-file /var/log/some-other-file # ...
Running it will annotate the IP addresses with their PTR records in-place:
$ echo "74.125.132.147, 64.34.119.12." | rdns
74.125.132.147 (rdns: wb-in-f147.1e100.net), 64.34.119.12 (rdns: stackoverflow.com).
And here's the source:
#!/usr/bin/env python
import sys, re, socket

cache = dict()

def resolve(x):
    key = x.group(0)
    try:
        return "%s (rdns: %s)" % (key, cache[key])
    except KeyError:
        try:
            cache[key] = socket.gethostbyaddr(key)[0]
        except socket.herror:
            cache[key] = '?'
        return "%s (rdns: %s)" % (key, cache[key])

for f in [open(x) for x in sys.argv[1:]] or [sys.stdin]:
    for line in f:
        sys.stdout.write(re.sub(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", resolve, line))
# End of file.
Please note: this isn't quite what you're after to the letter (using ‘standard tools’). But it probably helps you more than a hack that resolves every IP address every time it's encountered. With a few more lines, you can even make it cache its results persistently, which would help with repeat invocations.
| resolve all ip addresses in command output using standard command line tools |
1,351,783,683,000 |
I'm using top with tmux to monitor the most-CPU-consuming processes on different computers. How can I get the hostname to be displayed in each pane? Can I tell top to display the hostname somehow?
|
As far as I can see, there is no way to do it in top. I would suggest using htop. If you open the program, press F2 and move with Tab to the "Available meters" column. You'll find an entry "Hostname", which you can place on the left or right side by pressing F5 or F6.
| show hostname in top |
1,351,783,683,000 |
So I try to do:
ssh $(hostname)
and it tells me:
ssh: Could not resolve hostname woofy: Name or service not known
It knows that its own hostname is "woofy"; why can't it connect to itself?
|
So we know hostname returns woofy. But that name can't be resolved to an IP address.
The short answer is that you need to add an entry for woofy in /etc/hosts. Make it resolve to 127.0.0.1. Or if your system is IPv6-capable, ::1.
Keep a backup of the previous version of /etc/hosts in case you make a mistake (I use the handy etckeeper package for this, but if you prefer it ye olde way, you can use a manual backup or even RCS).
The long answer is that how hostnames are resolved to IP addresses is controlled by a set of configuration files which vary slightly between Unix variants. You can configure your Unix system to resolve host names by hosts file (/etc/hosts will work on almost any Unix system) or by DNS (systems which have direct IP reachability to the Internet will always do this). There are other alternatives too, mostly less widely used (including LDAP and NIS/NIS+). See the Wikipedia article about the Name Service Switch for more context on this.
Edit: if this still causes a DNS lookup, the problem is probably that your Name Service Switch configuration consults DNS before /etc/hosts so the change to /etc/hosts has no effect. Try looking at /etc/nsswitch.conf (how the NSS is configured varies between operating systems).
| Unable to resolve hostname |
1,351,783,683,000 |
In a remote shell, how can I find the domain name of the computer from which I logged into the remote machine?
Example: My local machine is mi.pona.com. On this machine I run
ssh [email protected]
to login into the remote machine sina.pona.com. In the shell which opens (running on the remote machine) I want to find out from which computer I logged in, so I want to get the result "mi.pona.com". Is there a command for this?
|
On my Red Hat 7 machine, I run who am i or who am I or who -m.
The last column will show the machine name where I logged in from (in parenthesis). If I am on my local machine, the last column will show my console/display ID. On my machine it is (:0).
Caveat
This only works on an interactive shell.
ssh ScottieH@RemoteServer who -m will give unexpected results.
On my Red Hat 7 machine, It spews an error.
YMMV
| In a remote shell, how can I find out from which computer I logged into the remote machine? [duplicate] |
1,351,783,683,000 |
I want a host section in my ssh config that matches any local IP:
Host 10.* 192.168.*.* 172.31.* 172.30.* 172.2?.* 172.1?.*
setting
setting
...
This works as long as I connect directly to a relevant IP. If I however connect to a hostname that later resolves to one of these IPs, the section is ignored.
sshd has Match Address sections which I think can be used for this, but they won't work in ssh client configs.
Is there any way to achieve this?
|
You can't do that using only ssh_config options, but there is exec option, which can do that for you:
Match exec "getent hosts %h | grep -qE '^(192\.168|10\.|172\.1[6789]\.|172\.2[0-9]\.|172\.3[01]\.)'"
setting
| ssh_config: Add a host section that matches IPs even when connecting via hostname |
1,351,783,683,000 |
I have a problem when I send a mail to [email protected] for [email protected]. My emails have bounced with this error: status=bounced (mail for orbialia.es loops back to myself)
May 9 09:33:58 ns3285243 postfix/smtpd[1606]: connect from localhost.localdomain[127.0.0.1]
May 9 09:33:58 ns3285243 postfix/smtpd[1606]: 1EF6FA1DB9: client=localhost.localdomain[127.0.0.1]
May 9 09:33:58 ns3285243 postfix/cleanup[1584]: 1EF6FA1DB9: message-id=<[email protected]>
May 9 09:33:58 ns3285243 postfix/qmgr[1575]: 1EF6FA1DB9: from=<[email protected]>, size=7184, nrcpt=1 (queue active)
May 9 09:33:58 ns3285243 postfix/smtpd[1606]: disconnect from localhost.localdomain[127.0.0.1]
May 9 09:33:58 ns3285243 amavis[15721]: (15721-16) Passed CLEAN {RelayedInbound}, [79.145.170.251]:1991 [79.145.170.251] <[email protected]> -> <[email protected]>, Queue-ID: 9D0DCA1DB4, Message-ID: <[email protected]>, mail_id: 8JHrgdOkE3Pw, Hits: -0.999, size: 6675, queued_as: 1EF6FA1DB9, 26690 ms
May 9 09:33:58 ns3285243 postfix/smtp[1588]: 9D0DCA1DB4: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=28, delays=1/0.02/0.01/27, dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10025): 250 2.0.0 Ok: queued as 1EF6FA1DB9)
May 9 09:33:58 ns3285243 postfix/qmgr[1575]: 9D0DCA1DB4: removed
May 9 09:33:58 ns3285243 postfix/smtp[1607]: 1EF6FA1DB9: to=<[email protected]>, relay=none, delay=0.11, delays=0.09/0.02/0/0, dsn=5.4.6, status=bounced (mail for orbialia.es loops back to myself)
May 9 09:33:58 ns3285243 postfix/cleanup[1584]: 42EC4A1DB4: message-id=<[email protected]>
May 9 09:33:58 ns3285243 postfix/bounce[1608]: 1EF6FA1DB9: sender non-delivery notification: 42EC4A1DB4
May 9 09:33:58 ns3285243 postfix/qmgr[1575]: 42EC4A1DB4: from=<>, size=9465, nrcpt=1 (queue active)
May 9 09:33:58 ns3285243 postfix/qmgr[1575]: 1EF6FA1DB9: removed
May 9 09:33:58 ns3285243 postfix/smtp[1607]: 42EC4A1DB4: to=<[email protected]>, relay=none, delay=0.05, delays=0.04/0/0/0, dsn=5.4.6, status=bounced (mail for orbialia.es loops back to myself)
May 9 09:33:58 ns3285243 postfix/qmgr[1575]: 42EC4A1DB4: removed
This is my postfix config
alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
append_dot_mydomain = no
biff = no
config_directory = /etc/postfix
content_filter = smtp-amavis:[127.0.0.1]:10024
inet_interfaces = all
mailbox_command = procmail -a "$EXTENSION"
mailbox_size_limit = 0
milter_default_action = accept
milter_protocol = 2
mydestination = localhost, ns3285243.ip-5-135-177.eu
myhostname = rentabiliza.net
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
myorigin = /etc/mailname
non_smtpd_milters = inet:localhost:8891
policy-spf_time_limit = 3600s
readme_directory = no
recipient_delimiter = +
relayhost =
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
smtpd_milters = inet:localhost:8891
smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination check_policy_service unix:private/policy-spf
smtpd_sasl_auth_enable = yes
smtpd_sasl_path = private/auth
smtpd_sasl_type = dovecot
smtpd_tls_auth_only = yes
smtpd_tls_cert_file = /etc/ssl/certs/dovecot.pem
smtpd_tls_key_file = /etc/ssl/private/dovecot.pem
smtpd_use_tls = yes
virtual_alias_domains = mysql:/etc/postfix/sql-domain-aliases.cf
virtual_alias_maps = mysql:/etc/postfix/sql-aliases.cf, mysql:/etc/postfix/sql-domain-aliases-mailboxes.cf, mysql:/etc/postfix/sql-email2email.cf, mysql:/etc/postfix/sql-catchall-aliases.cf
virtual_mailbox_domains = mysql:/etc/postfix/sql-domains.cf
virtual_mailbox_maps = mysql:/etc/postfix/sql-mailboxes.cf
virtual_transport = lmtp:unix:private/dovecot-lmtp
I use virtual domains/users in a mysql database
hostname: ns3285243.ip-5-135-177.eu
hostname -f: ns3285243.ip-5-135-177.eu
/etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
5.135.177.115 ns3285243.ip-5-135-177.eu ns3285243
2001:41D0:8:B873::1 ns3285243.ip-5-135-177.eu ns3285243
# The following lines are desirable for IPv6 capable hosts
#(added automatically by netbase upgrade)
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
When I send mail to a gmail.com destination, it is received successfully.
If I put orbialia.es in mydestination I receive this:
May 9 09:47:28 ns3285243 postfix/smtpd[2601]: NOQUEUE: reject: RCPT from 251.Red-79-145-170.dynamicIP.rima-tde.net[79.145.170.251]: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown in local recipient table; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<50l3rport>
I have multiple virtual domains. How can I resolve this?
|
From comment:
The problem in this case is that the domain has not been marked as active, so when MySQL is queried for active domains this one is not returned.
| Postfix email bounced (mail for domain loops back to myself) |
1,351,783,683,000 |
I configured my machine with the following hostname (in Red Hat 7.2)
digi.master01.usa.com
but my prompt is like this
[root@digi ]#
while we want
[[email protected] ]#
Any idea how to change it in the Linux configuration?
|
In bash you can use two special characters regarding hostname:
\h to get host name up to the first dot
\H to get full host name
If you want anything else you need to make your own version for example with HOSTNAME variable:
[root@digi ]# HOSTNAME=digi.master01.usa.com # this should be set automatically by bash
[root@digi ]# PS1="[\u@${HOSTNAME%.*.*} ]#"
[[email protected] ]#
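As a quick illustration of how these expansions behave (the hostname below is made up, for demonstration only):

```shell
# Hypothetical hostname, for illustration only
HOSTNAME=digi.master01.usa.com
echo "${HOSTNAME%%.*}"   # everything up to the first dot, like \h prints
echo "$HOSTNAME"         # the full name, like \H prints
```

The `%%.*` expansion strips the longest suffix starting with a dot, leaving just the short host name.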
| Bash prompt takes only the first word of a hostname before the dot |
1,351,783,683,000 |
So I changed the name of my computer recently, by editing /etc/hostname. The bash prompt is user@name, so I know it has worked.
But in emacs, in the window bar it says [email protected]. Tiscali is my internet provider. Before I changed it, emacs just had emacs@oldname in the window bar.
How can I remove my IP's name from the window bar in emacs?
|
The name in /etc/hostname is what your computer thinks it's called. That's often what is meant by hostname.
Computers connected to the Internet (including intranets) have names; more precisely, most Internet network interfaces have a name associated with their IP address. Internet nodes that are not routers generally have a single network interface worth consideration, and the name associated to this interface is called its fully qualified domain name (FQDN). The FQDN is the name you can use to designate your computer from any other Internet node (well, assuming there are no complex configurations involved). Generally, to keep confusion down, the FQDN is something like foo.example.com where foo is the host name.
When you have a single frame open, Emacs uses the value of the system-name variable as the frame title. This variable is set to the FQDN when Emacs starts. You can change it from your .emacs if you wish. This variable isn't used much, so don't worry about changing it. It is used to form a default email address when you send mail or news posts from within Emacs, but you'd almost always override that email address setting anyway.
The frame title format is determined by the frame-title-format variable (unless overridden for a specific frame). You can change it if you'd like to use something other than system-name when there is a single frame. For example, if you want to always see the buffer name in the frame title (as opposed to when there is only one Emacs frame), you can set it to "%b".
| Why is my hostname different in Emacs? |
1,351,783,683,000 |
I'm running an arch system with KDE4/Plasma, wpa_supplicant, networkmanager, systemd ...
# cat /proc/version
Linux version 5.0.0-arch1-1-ARCH (builduser@heftig-18825) (gcc version 8.2.1 20181127 (GCC)) #1 SMP PREEMPT Mon Mar 4 14:11:43 UTC 2019
The content of my /etc/hostname reads localhost.
After boot, the shell command hostname outputs localhost. More precisely:
# hostnamectl
Static hostname: localhost
Transient hostname: localhost.localdomain
Icon name: computer-laptop
Chassis: laptop
Machine ID: 7e0a101cd2f0406497a6e4354fc9b3b7
Boot ID: a1424a0995da4e84b1e55b7f79df957e
Operating System: Arch Linux
Kernel: Linux 5.0.0-arch1-1-ARCH
Architecture: x86-64
When I turn on WiFi, networkmanager connects to a WiFi network and then the hostname changes. For instance:
# hostnamectl
Static hostname: localhost
Transient hostname: localhost.localdomain
Icon name: computer-laptop
Chassis: laptop
Machine ID: 7e0a101cd2f0406497a6e4354fc9b3b7
Boot ID: a1424a0995da4e84b1e55b7f79df957e
Operating System: Arch Linux
Kernel: Linux 5.0.0-arch1-1-ARCH
Architecture: x86-64
The shell command hostname now outputs localhost.localdomain instead of localhost.
As a consequence, the KDE lock-screen cannot be unlocked and I cannot start any X applications from the terminal in KDE (or any other desktop). A typical error message is this:
$ gvim
Invalid MIT-MAGIC-COOKIE-1 keyE233: cannot open display
When I issue hostnamectl set-hostname localhost as root, the behavior resumes to normal.
In some other WiFis, the hostname after connect is not localhost.localdomain but something even more random (it seems to be a hostname determined by the WiFi provider, mostly in big corporate networks). Why does a WiFi provider have the power to set my hostname?
Can this be changed somehow?
|
Ivanivan's answer (tuning dhcpcd.conf), though plausible, didn't work in my case, so I suspect it is not about DHCP. I stumbled upon this post which told me that the problem is not about DHCP but about NetworkManager. Adding the following to /etc/NetworkManager/NetworkManager.conf solved the problem for me:
[main]
plugins=keyfile
hostname-mode=none
See man 5 NetworkManager.conf for details on the hostname-mode option. Setting it to none prevents NetworkManager from setting a transient hostname which is what happened in my case.
| NetworkManager interferes with hostname configuration |
1,351,783,683,000 |
I have a following bash prompt string:
root@LAB-VM-host:~# echo "$PS1"
${debian_chroot:+($debian_chroot)}\u@\h:\w\$
root@LAB-VM-host:~# hostname
LAB-VM-host
root@LAB-VM-host:~#
Now if I change the hostname from LAB-VM-host to VM-host with hostname command, the prompt string for this bash session does not change:
root@LAB-VM-host:~# hostname VM-host
root@LAB-VM-host:~#
Is there a way to update hostname part of bash prompt string for current bash session or does it apply only for new bash sessions?
|
Does Debian really pick up a changed hostname if PS1 is re-exported, as the other answers suggest? If so, you can just refresh it like this:
export PS1="$PS1"
Don't know about debian, but on OS X Mountain Lion this will not have any effect. Neither will the explicit version suggested in other answers (which is exactly equivalent to the above).
Even if this works, the prompt must be reset separately in every running shell. In which case, why not just manually set it to the new hostname? Or just launch a new shell (as a subshell with bash, or replace the running process with exec bash); the hostname will be updated.
To automatically track hostname changes in all running shells, set your prompt like this in your .bashrc:
export PS1='\u@$(hostname):\w\$ '
or in your case:
export PS1='${debian_chroot:+($debian_chroot)}\u@$(hostname):\w\$ '
I.e., replace \h in your prompt with $(hostname), and make sure it's enclosed in single quotes. This will execute hostname before every prompt it prints, but so what. It's not going to bring the computer to its knees.
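You can verify that the single quotes matter by inspecting the stored value: the command substitution is kept literally and only expanded each time bash prints the prompt.

```shell
# Single quotes store $(hostname) literally; bash expands it at prompt time
PS1='\u@$(hostname):\w\$ '
printf '%s\n' "$PS1"   # prints the literal string, $(hostname) unexpanded
```

With double quotes, $(hostname) would be expanded once at assignment time and frozen into the prompt.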
| How to change bash prompt string in current bash session? |
1,351,783,683,000 |
I want to have the FQDN as bash prefix instead of just using the hostname. So I can change
root@web: ~$
to
[email protected]: ~$
I already know that that is possible by using:
PS1="\[\u@$(hostname -f): \w\]\$ "
But that is not persistent - it is always the default hostname when I re-login. So is there a way to make this persistent?
|
Thanks to @dawud and @EsaJokinen comments I found a solution. Replacing
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
with
PS1="\[\u@$(hostname -f): \w\]\$ "
in
/etc/bash.bashrc
does the job on Debian 7
| Persistent Bash Prompt Prefix Linux |
1,351,783,683,000 |
I have tried hostname and ping in a cluster machine, with different outputs. I am wondering what is the difference between the two? For example, on the same machine, hostname outputs node4.XXX and
ping -c 1 $(hostname)
outputs pc333.XXX.
|
The hostname command outputs the hostname of the system from the system's local hostname configuration (this could be /etc/hostname, /proc/sys/kernel/hostname, or elsewhere depending on the OS).
The command ping -c 1 <hostname> is going to perform a lookup through the libc resolver (which may or may not be DNS. e.g., /etc/hosts is not DNS) of the <hostname> specified and then perform a reverse DNS lookup of the IP address returned and report that name in the output of the ping command.
As a concrete example, suppose that the local system hostname is fred as specified in /etc/hostname. The hostname command will return 'fred'. The command ping -c 1 fred will perform a DNS lookup of fred (either just fred, or fred fully qualified such as fred.domain.com if the default domain is domain.com). Assume that DNS returns IP address x.x.x.x. ping will then perform a reverse DNS lookup of IP address x.x.x.x; if no name is returned, ping will output the IP address x.x.x.x, otherwise ping will output whatever name was returned from the reverse lookup, which could be a different name such as ethel.domain.com.
| Why the difference between network addresses reported by hostname and ping? |
1,351,783,683,000 |
I am using zsh with oh-my-zsh. Unfortunately, oh-my-zsh does not use file ~/.ssh/config for hostname auto-completion (see Issue #1009, for instance).
This could easily archived by the following code:
[ -r ~/.ssh/config ] && _ssh_config=($(cat ~/.ssh/config | sed -ne 's/Host[=\t ]//p')) || _ssh_config=()
zstyle ':completion:*:hosts' hosts $_ssh_config
However, if I add the above commands to my ~/.zshrc file, all other sources for hostnames (like ~/.ssh/known_hosts), which are defined in file ~/.oh-my-zsh/lib/completion.zsh, are overridden.
How can I append new completion rules for ':completion:*:hosts' in my ~/.zshrc file?
|
I think you need to retrieve the existing items and append yours.
zstyle -s ':completion:*:hosts' hosts _ssh_config
[[ -r ~/.ssh/config ]] && _ssh_config+=($(cat ~/.ssh/config | sed -ne 's/Host[=\t ]//p'))
zstyle ':completion:*:hosts' hosts $_ssh_config
| How to append / extend zshell completions? |
1,351,783,683,000 |
I want to use SSH to access a remote machine. However, I do not know how to find the HostName of a machine. I tried using the hostname command, but that only gives the local address of the machine, which (I think) can be same across different machines.
When I try ssh name with name being the hostname returned by the hostname command, I get an error saying that the "hostname is not recognized."
How do I find the complete hostname which I can use to distinguish the target machine?
PS: I am fully authorized to use the target machine.
|
To access a remote machine using ssh, you will need to know either its public host name or its IP address.
You will also need to have either a username and password for an account on the other machine or an SSH private key corresponding to an SSH public key installed on a particular account on the machine you're connecting to.
If you do not know these details, you will have to ask someone who's administrating the other machine for how to best connect with ssh.
The hostname command is exclusively used for setting or displaying the name of the system it's being run on. If you run it locally, it will return the name of your local machine, which may or may not be a fully qualified host name (in either case, it's the name/IP-number of the local machine).
| Finding hostname to run SSH [closed] |
1,351,783,683,000 |
I have a voyage 2.6.38 machine running DNSMASQ for a DHCP server and I would like to get the hostnames of the clients that acquire DHCP leases. How would I go about doing this?
|
If the host sends its name you can retrieve it from DNS. If you know its IP address you just do a reverse lookup on the IP address. One of these commands should work (use the host's IP address in place of 192.0.32.10):
host 192.0.32.10
nslookup 192.0.32.10
You can retrieve a list of all leases, including the name provided (if any), from your dhcp.leases file. Its location will vary depending on the distribution you use. Ubuntu uses /var/lib/misc/dnsmasq.leases while OpenWrt uses /tmp/dhcp.leases. If you have a man page for dnsmasq, the command man dnsmasq should mention the location of the leases file at the end of the document. You can override this location by specifying the dhcp-leasefile option in your configuration or command line. The command line options -l or --dhcp-leasefile= can be used to do this.
The fields in the leasefile are timestamp, mac address, ip address, hostname, and client id. The client is not required to send a hostname or client id.
If logging has been enabled, you can look at the syslog to see which leases have been negotiated. All DHCP negotiations should be logged. If you have long lease times, the negotiations will not be frequent. Clients should start negotiating a renewal at half the lease time. It is best to set the lease time to at least twice the period you can reasonably expect your DHCP server to be down.
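As a sketch of extracting those fields, here is an awk one-liner run against a made-up lease line (the MAC, IP and hostname below are hypothetical):

```shell
# Hypothetical dnsmasq.leases line: expiry, MAC address, IP address, hostname, client id
printf '1651234567 aa:bb:cc:dd:ee:ff 192.168.1.50 laptop01 01:aa:bb:cc:dd:ee:ff\n' \
    > /tmp/dnsmasq.leases.sample
# Print just the IP and hostname columns
awk '{print $3, $4}' /tmp/dnsmasq.leases.sample
```

Running the same awk command against your real leases file would list the IP/hostname pairs dnsmasq currently knows about (a `*` in the hostname column means the client sent no name).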
| Get client hostnames from DHCP |
1,351,783,683,000 |
What is the right thing to set a computer's hostname to? (/bin/hostname, /etc/hostname, sethostname(2), etc.)
I usually just use the "name" part, and it works perfectly (for me at least), but I've seen it implied in some places that I should use the entire "name.domain.tld" instead... which just doesn't feel "right" for me.
|
Just the name, as you already do. A hostname shouldn't contain any domain name.
| `hostname` - host name or FQDN? |
1,351,783,683,000 |
How can I prevent dhcpcd from setting the hostname it gets from the server? Changing it breaks a lot of things (including the X session).
My current distribution is Gentoo, the init system is systemd, and dhcpcd is spawned by NetworkManager.
|
From n.m.'s link - the solution is described on the NM webpage under 'Persistent Hostname'. One needs to add to /etc/NetworkManager/NetworkManager.conf:
[main]
plugins=keyfile
[keyfile]
hostname=deepspace9
| Prevent dhcpcd from setting hostname |
1,351,783,683,000 |
On my Fedora 19 system, I am able to change the system hostname with hostnamectl. This allows me to set several things, such as the static (normal) hostname, as well as a "pretty" hostname.
Is there a simple command that retrieves the pretty hostname, from a bash prompt?
hostname returns the static hostname, and the man page shows no options to recover the pretty one.
|
As per man hostnamectl:
The static host name is stored in /etc/hostname, see hostname(5) for more information. The pretty host name, chassis type and icon name are stored in /etc/machine-info, see machine-id(5).
Therefore, if you have set a pretty hostname using the command
hostnamectl set-hostname --pretty YourHostname
you can retrieve it using a tool like awk:
awk -F= '/PRETTY/ {print $2}' /etc/machine-info
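For illustration, here is that awk command run against a sample file with assumed contents (real /etc/machine-info files may quote the value, in which case the quotes would appear in the output):

```shell
# Hypothetical machine-info contents, for illustration only
printf 'PRETTY_HOSTNAME=My Laptop\nICON_NAME=computer-laptop\n' > /tmp/machine-info.sample
# Split on '=' and print the value of the PRETTY_HOSTNAME line
awk -F= '/PRETTY/ {print $2}' /tmp/machine-info.sample
```

On a real system you would point awk at /etc/machine-info itself, as shown above.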
| Obtain the "pretty" hostname in bash |
1,351,783,683,000 |
In my local wifi, I can find out the IP and MAC of another computer which also runs Lubuntu, and whose hostname is known to me.
$ sudo arp-scan olive
[sudo] password for t:
Interface: wlx801f02b5c389, datalink type: EN10MB (Ethernet)
Starting arp-scan 1.9 with 1 hosts (http://www.nta-monitor.com/tools/arp-scan/)
192.168.1.198 aa:bb:cc:dd:ee:ff Liteon Technology Corporation
1 packets received by filter, 0 packets dropped by kernel
Ending arp-scan 1.9: 1 hosts scanned in 1.449 seconds (0.69 hosts/sec). 1 responded
There are other computers in the same local wifi network, which most likely run Windows and whose hostnames I don't know.
Can I find out their hostnames from my computer? arp-scan -l doesn't show that information.
Thanks.
|
Install nmap for whatever flavour of Linux you are on.
Then perform:
nmap -sP xxx.xxx.xxx.xxx/nn
Where xxx.xxx.xxx.xxx/nn is your IP and subnet mask bits. For example, if you were on 192.168.1.0 network with a subnet mask of 255.255.255.0 you would do:
nmap -sP 192.168.1.0/24
Nmap will ping scan all IPs and try to resolve their hostnames for you.
For example, on my local LAN I see:
nmap -sP 192.168.169.0/24
Starting Nmap 7.60 ( https://nmap.org ) at 2019-02-23 20:23 CST
Nmap scan report for _gateway (192.168.169.1)
Host is up (0.00026s latency).
Nmap scan report for grid (192.168.169.6)
Host is up (0.00026s latency).
Nmap scan report for anode (192.168.169.8)
Host is up (0.000048s latency).
Nmap scan report for 192.168.169.100
Host is up (0.0027s latency).
Nmap scan report for 192.168.169.101
Host is up (0.049s latency).
Nmap scan report for 192.168.169.102
Host is up (0.092s latency).
Nmap scan report for 192.168.169.104
Host is up (0.012s latency).
Nmap scan report for 192.168.169.106
Host is up (0.055s latency).
Nmap scan report for 192.168.169.107
Host is up (0.12s latency).
Nmap scan report for 192.168.169.109
Host is up (0.00095s latency).
Nmap scan report for 192.168.169.250
Host is up (0.0024s latency).
Nmap done: 256 IP addresses (11 hosts up) scanned in 3.79 seconds
The hosts with no names are DHCP clients and do not get hostnames. The others are in DNS, and so they have hostnames.
Similarly, arp-scan should give you the MAC address. Again, install arp-scan for your distribution, then do:
arp-scan 192.168.1.0/24
Assuming again you are on the same LAN as the example above. If you have a non-standard interface, you may specify it with the -I switch:
arp-scan -I enp4s2 192.168.1.0/24
Note that on most distros, you will need to be root for this, so (again, example on my local LAN):
sudo arp-scan 192.168.169.0/24
This discovered all of my nodes:
Interface: enp5s0, datalink type: EN10MB (Ethernet)
Starting arp-scan 1.9 with 256 hosts (http://www.nta-monitor.com/tools/arp-scan/)
192.168.169.6 74:d4:35:85:e0:44 GIGA-BYTE TECHNOLOGY CO.,LTD.
192.168.169.1 f4:f2:6d:70:16:c2 (Unknown)
192.168.169.100 6c:70:9f:d0:ff:1a (Unknown)
192.168.169.101 10:9a:dd:80:f4:93 Apple, Inc.
192.168.169.104 08:02:8e:8e:a0:f6 (Unknown)
192.168.169.111 00:1c:c0:6e:f4:ec Intel Corporate
192.168.169.106 fc:c2:de:4c:58:48 (Unknown)
192.168.169.102 b4:f6:1c:f2:f9:52 (Unknown)
192.168.169.103 dc:68:eb:5b:aa:c8 (Unknown)
192.168.169.250 00:01:e6:a2:3f:17 Hewlett-Packard Company
192.168.169.107 68:37:e9:d7:39:0b (Unknown)
192.168.169.105 08:d4:6a:d1:df:5e (Unknown)
13 packets received by filter, 0 packets dropped by kernel
Ending arp-scan 1.9: 256 hosts scanned in 2.438 seconds (105.00 hosts/sec). 13 responded
| How can I find out the hostnames of other computers in the same local network? |
1,351,783,683,000 |
I was looking at bind9-host
shirish@debian:"04 Jan 2020 15:48:02" ~$ aptitude show bind9-host=1:9.11.5.P4+dfsg-5.1+b1
Package: bind9-host
Version: 1:9.11.5.P4+dfsg-5.1+b1
State: installed
Automatically installed: no
Priority: standard
Section: net
Maintainer: Debian DNS Team <[email protected]>
Architecture: amd64
Uncompressed Size: 369 k
Compressed Size: 271 k
Filename: pool/main/b/bind9/bind9-host_9.11.5.P4+dfsg-5.1+b1_amd64.deb
Checksum-FileSize: 271156
MD5Sum: 8cd326a23a51acdb773df5b7dce76060
SHA256: 977287c7212e9d3e671b85fdd04734b4908fe86d4b3581e47fb86d8b27cfdb3b
Archive: testing
Depends: libbind9-161 (= 1:9.11.5.P4+dfsg-5.1+b1), libdns1104 (= 1:9.11.5.P4+dfsg-5.1+b1), libisc1100 (= 1:9.11.5.P4+dfsg-5.1+b1), libisccfg163 (= 1:9.11.5.P4+dfsg-5.1+b1), liblwres161 (= 1:9.11.5.P4+dfsg-5.1+b1), libc6 (>= 2.14), libcap2 (>= 1:2.10), libcom-err2 (>= 1.43.9), libfstrm0 (>= 0.2.0), libgeoip1, libgssapi-krb5-2 (>= 1.6.dfsg.2), libidn2-0 (>= 2.0.0), libjson-c4 (>= 0.13.1), libk5crypto3 (>= 1.6.dfsg.2), libkrb5-3 (>= 1.6.dfsg.2), liblmdb0 (>= 0.9.6), libprotobuf-c1 (>= 1.0.0), libssl1.1 (>= 1.1.0), libxml2 (>= 2.6.27)
Provides: host
Description: DNS lookup utility (deprecated)
This package provides /usr/bin/host, a simple utility (bundled with the BIND 9.X sources) which can be used for converting domain names to IP addresses and the reverse.
This utility is deprecated, use dig or delv from the dnsutils package.
Homepage: https://www.isc.org/downloads/bind/
What is and was interesting to me is that, although the utility itself is being deprecated and apparently has numerous issues, it still seems to be kept, and I don't see why. I also don't see any deprecation notice or documentation in /usr/share/doc/bind9-host. There is usually a NEWS.gz which gives this information, but this package doesn't have one; Changelog.gz and the other files don't mention it either.
Interestingly, they continue to ship it:
$ apt-cache policy bind9-host
bind9-host:
Installed: 1:9.11.5.P4+dfsg-5.1+b1
Candidate: 1:9.11.5.P4+dfsg-5.1+b1
Version table:
1:9.15.7-1 100
100 http://cdn-fastly.deb.debian.org/debian experimental/main amd64 Packages
1:9.11.14+dfsg-1 100
100 http://cdn-fastly.deb.debian.org/debian unstable/main amd64 Packages
*** 1:9.11.5.P4+dfsg-5.1+b1 900
900 http://cdn-fastly.deb.debian.org/debian testing/main amd64 Packages
100 /var/lib/dpkg/status
|
host is not deprecated by Internet Systems Consortium, the BIND company. It does not even deprecate nslookup as it once did.
This deprecation of host was done in 2018 by a Debian Developer, on xyr own initiative, in response to a 2013 Debian bug report about the package description that did not actually mention deprecation. The Debian package description is the only place where deprecation is mentioned, and there is no rationale for it.
If one were going to deprecate ISC tools (again), there is a far more obvious place to start.
As a Debian user, you might like to submit a bug report about this deprecation.
Further reading
Justin B Rye (2013-11-14). bind9-host: unhelpful package description. Debian bug #729561.
Bernhard Schmidt (2018-03-22). Update bind9-host description. salsa.debian.org.
Mark Andrews (2004-08-19). 1700. [func] nslookup is no longer to be treated as deprecated.. gitlab.isc.org.
Jonathan de Boyne Pollard (2001). nslookup is a badly flawed tool. Don't use it.. Frequently Given Answers.
https://unix.stackexchange.com/a/446293/5132
| why host from bind9-host is/was deprecated and when? |
1,351,783,683,000 |
I need to change the hostname of a system without rebooting it. I'm running CentOS 7 and have the correct hostname in the /etc/hostname file but I'm still showing the old hostname at prompt. I know that when I reboot the system it will check the hostname file and apply it, but is there anyway for me to update that without rebooting? Here is some info from the command line:
[root@gandalf sysconfig]# cat network
NETWORKING=yes
GATEWAY=192.168.80.1
HOSTNAME="sauron.domain.com"
[root@gandalf sysconfig]# cd ..
[root@gandalf etc]# cat hostname
sauron
[root@gandalf etc]#
I'm unable to reboot this server anytime soon, and some of my team are mixing up the servers because the hostname still shows the old system name. Simply put: I need the prompt to show [user@sauron dir]# instead of [user@gandalf dir]#.
Googled around for this but wasn't able to see a way to do this without the reboot. Thanks for your consideration!
|
You should be able to do this using the hostname command:
hostname -F /etc/hostname
After this change, the previous hostname will still show at your current prompt. To see the change without rebooting, enter a new shell. If you are using bash, type:
bash
Your new hostname should now be displayed.
| Need to force hostname update without restart |
1,351,783,683,000 |
I have a machine that I can only access using SSH.
I was messing with the hostnames, and now it says:
ssh: unable to resolve hostname
I know how to fix it in /etc/hosts.
Problem is, I need sudo to fix them because my normal account doesn't have permissions.
What's the best way to fix the hosts?
|
You don't need sudo to fix that; try pkexec:
pkexec nano /etc/hosts
pkexec nano /etc/hostname
After running pkexec nano /etc/hosts, add your new hostname in the line that starts with 127.0.1.1 like below,
127.0.0.1 localhost
127.0.1.1 your-hostname
And also don't forget to add your hostname inside /etc/hostname file after running pkexec nano /etc/hostname command,
your-hostname
Restart your PC. Now it works.
| How to edit /etc/hosts without sudo? |
1,351,783,683,000 |
Sorry my knowledge of Linux bash commands is pretty basic, I've been searching for a while but I'm not 100% sure what I need to search for.
I was wondering if there's a way to grab the currently logged-in user's remote host name in a Linux bash script? I have a script in which I need to log each time a user runs it. I'm obtaining the date like so:
cdat=`/bin/date +%a' '%d' '%h' '%Y', '%H':'%M`;
I now need to add the users remote host (not the username they logged in with). I'm not 100% sure I'm using the right terminology here either, just to clarify; by 'remote host name' I mean the same output that prints on the screen on most servers I've logged into over ssh, for example:
Last login: Mon Jul 22 16:35:09 2013 from win7-i7-stuart.my.domain.com
I'm looking for the win7-i7-stuart.my.domain.com bit.
|
Better to use the command who am i so that you don't get the duplicate info and have to parse it when using just a plain who.
$ who am i
sam pts/6 2013-07-22 13:21 (192.168.1.110)
Humourously you can also use this:
$ who mom likes
sam pts/6 2013-07-22 13:21 (192.168.1.110)
You can parse it using sed so that it's just the host they're connecting from:
$ who am i | sed 's/.*(\(.*\))/\1/'
192.168.1.110
You can also see the entire history of a user's logins using the last command:
$ last <username>
For example:
$ last sam | less
sam pts/6 192.168.1.110 Mon Jul 22 13:21 still logged in
sam pts/6 192.168.1.110 Mon Jul 22 11:02 - 11:02 (00:00)
sam pts/5 192.168.1.110 Thu Jul 18 14:41 - 16:41 (01:59)
sam pts/5 192.168.1.110 Wed Jul 17 15:56 - 16:28 (00:31)
sam pts/5 192.168.1.110 Wed Jul 17 15:56 - 15:56 (00:00)
sam pts/4 192.168.1.110 Wed Jul 17 14:28 - 14:29 (00:00)
sam pts/7 192.168.1.110 Tue Jul 16 16:27 - 16:50 (00:23)
References
4 Ways to Identify Who is Logged-In on Your Linux System
last man page
w man page
| Obtaining remote host name in bash script |
1,351,783,683,000 |
I have this weird problem. I need to access a website that contains an underscore from Linux. The hostname is invalid and Linux does treat it as such.
The problem is that access from Windows seems to work just fine and therefore the admin won't fix it.
Is there any way to access such site?
EDIT:
The hostname is: _nyx.isthereanydeal.com
host works
ping and browsers don't work, I have tried both un-encoded and encoded versions of the hostname
|
Since you can find the IP with host, add an entry to your /etc/hosts file mapping a new name to the same IP.
/etc/hosts
198.100.51.37 u_nyx.isthereanydeal.com
Then:
$ ping -c 1 u_nyx.isthereanydeal.com
PING u_nyx.isthereanydeal.com (198.100.51.37): 56 data bytes
64 bytes from 198.100.51.37: icmp_seq=0 ttl=48 time=69.157 ms
--- u_nyx.isthereanydeal.com ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 69.157/69.157/69.157/0.000 ms
| Is there any way to access a hostname containing an underscore? |
1,351,783,683,000 |
In order to have an alias for my server, which can be seen in a "hostname -a" command, I edited the /etc/hosts file to add the alias at the end of the entry containing the hostname.
For example, my hostname is host1 and I want to have alias hostalias, I have below entry in /etc/hosts:
192.168.0.1 host1 hostalias
With this change I am able to use "hostname -a" to see hostalias.
However, I can change it only once! If I edit the file /etc/hosts again to something like this:
192.168.0.1 host1 hostalias2
the output of "hostname -a" is still hostalias.
Even after I remove the hostalias2 and reboot the server, it's still saying hostalias.
BUT, if I change the alias the first time after reboot, it takes effect.
So in fact I have two questions:
Where is the hostname alias persistent, if not /etc/hosts (so it can survive a reboot).
Why can it only be changed once per boot?
More information: It is a RHEL 6.2 server.
|
@StephaneChazelas is right in this comment.
Possibly you have a name service cache daemon. Try after sudo nscd -i hosts (to invalidate the host cache).
I can't comment on the answer of a question, so I answer this question myself.
| in which file is hostname alias persistent, if not /etc/hosts? |