date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,391,839,012,000 |
I recently installed Debian 10 Buster with the KDE Plasma desktop on my laptop. When I log on, the system starts up the apps that were open before shutdown.
I edited the file /etc/initramfs-tools/conf.d/resume
to contain RESUME=none and ran sudo update-initramfs, but Debian was still resuming from disk when I restarted my computer.
Also, I have no kernel parameter like resume=UUIDblabla in my /etc/default/grub file.
|
It seems like this is a feature of KDE. It'll restart applications that were previously open in the last session.
Go to:
System Settings > Startup and Shutdown > Session Management
and select something other than "Restore previous session".
You'll probably also want to undo your RESUME=none change to re-enable hibernation.
| Disabling Hibernation Boot on Debian Buster |
1,391,839,012,000 |
I have a Dell Inspiron laptop running Gentoo with OpenRC. It has no trouble entering hibernation and suspend via the GUI or keyboard, and it also resumes normally.
I have configured xfce4-power-manager to suspend and hibernate on the lid event. And here comes the most fascinating thing: it wakes up as if everything were normal, but its screen is completely black; it simply was not turned back on upon waking.
Suspending and waking via the keyboard sleep button or the GUI works fine, and in that case the screen is properly switched back on, but with the lid, for some reason, the screen is not switched on.
When this happens I can attach a monitor via HDMI and from it manually enable the laptop screen, just like in this question. In my case, however, this happens only when triggering suspend/hibernate via the lid.
Inspecting the logs, I see no difference between suspending this system via the lid or the keyboard. From the logs' perspective, resume happens identically in both situations and the system is similarly operational, apart from the disabled screen in the lid case.
I have tried making ACPI video and the brightness modules built-in, and tried completely disabling the EFI framebuffer in the kernel, but nothing helps. I also tried enabling the screen with the brightness keys and the "monitor select" Fn key, also without any luck. I tried installing vbetool; not only did it fail to solve this, it also broke suspend and hibernation completely.
Given the observations, it looks like something (hardware or software) is supposed to turn the screen on, but with the lid event it does not happen properly or timely.
Given resume from the lid event, what is responsible for enabling the screen? How is the timing of this controlled? Am I missing some kernel module responsible for the lid? For video? For turning the screen on?
Given that I can later manually enable the screen via HDMI, can I add this action to a resume script? How do I do that? What might the command to "enable the embedded screen / Screen:0" look like?
|
TL;DR: my failing kernel was for some reason lacking CONFIG_DRM_FBDEV_EMULATION=y.
A more detailed answer follows for anyone interested.
Suddenly I remembered that I once had a genkernel kernel with an initramfs that probably worked fine.
It used the config from the LiveCD, so I got that config and compiled the kernel with it. Guess what? The issue was gone!
So I had two configs, one of which works and the other does not. At first I thought it was because of the initramfs, but it was quick to recompile the working config without an initramfs to confirm that the initramfs was not the deciding factor. The next step was to diff the two configs and try to identify the setting that affects my system. Well, easier said than done: the diff gave me an overwhelmingly large number of differences to sift through.
I had no choice but to morph the working config into the non-working one bit by bit, searching for the setting in question. It took quite a lot of tries, but eventually it was a success!
| Black screen after resume from suspend/hibernate on Dell Inspiron |
1,301,653,026,000 |
How can I get the /etc/hosts file to refer to another configuration file for its list of hosts?
Example /etc/hosts:
## My Hosts
127.0.0.1 localhost
255.255.255.255 broadcasthost
#Other Configurations
<Link to /myPath/to/MyConfig/ConfigFile.txt>
#Other Addresses
3.3.3.3 MyAwesomeDomain.com
4.4.4.4 SomeplaceIWantToGoTo.com
ConfigFile.txt
##My additional Hosts
1.1.1.1 SomeLocation.com
2.2.2.2 AnotherLocation.com
How do I add a link/Reference to /etc/hosts file such that ConfigFile.txt will be loaded?
|
You can't. The format for /etc/hosts is quite simple, and doesn't support including extra files.
There are a couple approaches you could use instead:
Set up a (possibly local-only) DNS server. Some of these give a lot of flexibility, and you can definitely spread your host files over multiple files, or even machines. Others (such as dnsmasq) offer less (but still sufficient) flexibility, but are easy to set up. If you're trying to include the same list of hosts on a bunch of machines, then DNS is probably the right answer.
Set up some other name service (NIS, LDAP, etc.). Check the glibc NSS docs for what is supported. Personally, I think you should use DNS in most all cases.
Make yourself an /etc/hosts.d directory or similar, and write a small script to concatenate the fragments (most trivially: cat /etc/hosts.d/*.conf > /etc/hosts, though you'll probably want something slightly more careful, e.g. to keep the glob's ordering independent of the current locale), and run that script at boot, from cron, or manually whenever you update the files.
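As a sketch of that third approach (the hosts.d layout and file names are assumptions, not a standard), a small function that rebuilds the target file from fragments in a stable order:

```shell
# Sketch: rebuild a hosts file from fragments in a hosts.d-style directory.
build_hosts() {
    srcdir=$1 outfile=$2
    tmp=$(mktemp)
    # C locale gives a stable, bytewise ordering of the fragment names,
    # independent of the current locale's collation rules.
    ( export LC_ALL=C; cat "$srcdir"/*.conf ) > "$tmp"
    # mv is atomic on the same filesystem, so readers never see a
    # half-written hosts file.
    mv "$tmp" "$outfile"
}

# Example (run as root): build_hosts /etc/hosts.d /etc/hosts
```

Prefixing the fragments with numbers (10-base.conf, 20-lan.conf, ...) lets you control the concatenation order explicitly.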
Personally, at both home and work, to have machine names resolvable from every device, I run BIND 9. That does involve a few hours to learn, though.
| /etc/hosts file refer to another configuration file |
1,301,653,026,000 |
As I understand it, the hosts file is one of several system facilities that assist in addressing network nodes in a computer network. But what should be inside it? When I install Ubuntu, by default
127.0.0.1 localhost
will be there. Why?
How does /etc/hosts work in case of JVM systems like Cassandra?
When is DNS an alternative? Not on a single computer, I guess?
|
The file /etc/hosts started in the old days of DARPA as the resolution file for all the hosts connected to the internet (before DNS existed). It has the maximum priority, meaning this file is preferred ahead of any other name system.1
However, as a single file, it doesn't scale well: the size of the file becomes too big very soon. That is why the DNS system was developed, a hierarchical distributed name system. It allows any host to find the numerical address of some other host efficiently.
The very old concept of the /etc/hosts file is very simple, just an address and a host name:
127.0.0.1 localhost
for each line. That is a simple list of pairs of address-host.2
Its primary present-day use is to bypass DNS resolution. A match found in the /etc/hosts file will be used before any DNS entry. In fact, if the name searched (like localhost) is found in the file, no DNS resolution will be performed at all.
1 Well, the order of name resolution is actually defined in /etc/nsswitch.conf, which usually has this entry:
hosts: files dns
which means "try files (/etc/hosts); and if it fails, try DNS."
But that order could be changed or expanded.
2 Nowadays, the hosts file contains lines of text consisting of an IP address in the first text field followed by one or more host names. Each field is separated by white space – tabs are often preferred for historical reasons, but spaces are also used. Comment lines may be included; they are indicated by an octothorpe (#) in the first position of such lines. Entirely blank lines in the file are ignored. For example, a typical hosts file may contain the following:
127.0.0.1 localhost loopback
::1 localhost localhost6 ipv6-localhost ipv6-loopback mycomputer.local
192.168.0.8 mycomputer.lan
10.0.0.27 mycomputer.lan
This example contains entries for the loopback addresses of the system and their host names, the first line is a typical default content of the hosts file. The second line has several additional (probably only valid in local systems) names. The example illustrates that an IP address may have multiple host names (localhost and loopback), and that a host name may be mapped to both IPv4 and IPv6 IP addresses, as shown on the first and second lines respectively. One name (mycomputer.lan) may resolve to several addresses (192.168.0.8 10.0.0.27). However, in that case, which one is used depends on the routes (and their priorities) set for the computer.
Some older OSes had no way to report a list of addresses for a given name.
| What is the purpose of /etc/hosts? |
1,301,653,026,000 |
How can I determine or set the size limit of /etc/hosts? How many lines can it have?
|
Problematic effects include slow hostname resolution (unless the OS somehow converts the linear list into a faster-to-search structure) and potentially surprising interactions with shell tab completion, well before any meaningful file size is reached.
For example! If one places 500,000 host entries in /etc/hosts
# perl -E 'for (1..500000) { say "127.0.0.10 $_.science" }' >> /etc/hosts
for science, the default hostname tab completion in ZSH takes about 25 seconds on my system to return a completion prompt (granted, this is on a laptop from 2009 with a 5400 RPM disk, but still).
| What is the /etc/hosts size limit? |
1,301,653,026,000 |
I have a domain set up to point to my LAN's external IP using dynamic DNS, because my external IP address changes frequently. However, I want to create an alias for this host, so I can access it as home. So I appended the following to my /etc/hosts:
example.com home
However, it doesn’t seem to like the domain name. If I change it to an IP:
0.0.0.0 home
then it works, but of course this defeats the purpose of dynamic DNS!
Is this possible?
|
The file /etc/hosts contains IP addresses and host names only. You cannot alias the string "home" in the way that you want by this method.
If you were running your own DNS server you'd be able to add a CNAME record to make home.example.com an alias for example.com, but otherwise you're out of luck.
The best thing you could do is use the same DNS client to update a fully-qualified name.
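For example, if you ran dnsmasq as your local resolver, such an alias could be a one-line sketch like the following (the file path is illustrative; note that dnsmasq's cname= option only works when it has local knowledge of the target, e.g. from its own host files or DHCP leases, so whether this fits your dynamic-DNS setup is an assumption):

```
# /etc/dnsmasq.d/home-alias.conf (illustrative path)
# Make "home" an alias for the dynamically updated name.
cname=home,example.com
```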
| Creating alias to domain name with /etc/hosts |
1,301,653,026,000 |
For a while I have been formatting my hosts file like this. Notice the same IP on two lines:
e.f.g.h foo.mydevsite.com
e.f.g.h foo.myOtherDevSite.com
I read recently that aliases are supposed to be consolidated on one line:
e.f.g.h foo.mydevsite.com foo.myOtherDevSite.com
However, I don't like this method because you can't easily comment out certain aliases or add comments to particular aliases, like this:
a.b.c.d foo.mydevsite.com # myDevSite on box 1
# a.b.c.d foo.myOtherSite.com # myOtherSite on box 1
a.b.c.d ubuntuBox
e.f.g.h foo.myOtherSite.com # myOtherSite testing environment
So far this has been working fine; is there a problem with this?
|
I found this thread that discusses doing something along these lines. The thread is pretty adamant about not having multiple lines with the same IP address in the /etc/hosts file.
excerpt - Re: /etc/hosts: Two lines with the same IP address?
No, it will not. The resolvers stop at the first resolution. Having
something like:
127.0.0.1 localhost.localdomain localhost
127.0.0.1 somenode.somedom.com somenode
Will not do what you are talking about. BUT having:
127.0.0.1 somenode.somedom.com somenode
127.0.0.1 localhost.localdomain localhost
Will cause all kinds of havoc. Including forwarding.
I would generally not do what you're attempting. If you need more evidence the man page even says not to do this:
excerpt man hosts
This manual page describes the format of the /etc/hosts file. This file is a simple text file that associates IP addresses with hostnames, one line per IP address. For each host a single line should be present with the following information:
IP_address canonical_hostname [aliases...]
All this being said, if your hostnames are FQDN and they don't overlap then you're probably safe to do what you're doing. Just keep in mind that if there is any overlap such as what was mentioned in the thread above, then you may run into resolving issues.
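To spot that kind of overlap before it causes resolving issues, a small awk sketch that flags any hostname appearing on more than one line of a hosts-format file:

```shell
# Sketch: report hostnames that appear on more than one line of a
# hosts file, which is where resolution order starts to matter.
check_hosts_dups() {
    awk '
        /^[[:space:]]*(#|$)/ { next }   # skip comments and blank lines
        {
            sub(/#.*/, "")              # strip trailing comments
            for (i = 2; i <= NF; i++) seen[$i]++
        }
        END {
            for (name in seen)
                if (seen[name] > 1)
                    print name, "appears", seen[name], "times"
        }
    ' "$1"
}

# Usage: check_hosts_dups /etc/hosts
```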
| Hosts file: Is it incorrect to have the same IP address on multiple lines? |
1,301,653,026,000 |
Pasted below this question is a sample of a /etc/hosts file from a Linux (CentOS) and a Windows machine. The Linux file has two tabbed entries after the IP address (that is localhost.localdomain localhost) and Windows has only one. If I want to edit the hosts file in Windows to have the machine name (etest) instead of localhost, I simply replace the word localhost with the machine name I want. The machine need not be part of a domain.
On a Linux machine, the two entries localhost.localdomain and localhost seem to indicate that I will need the machine to be part of a domain. Is this true?
Can I simply edit both entries to etest so that it will read:
127.0.0.1 etest etest
or is it required that I substitute one entry with a domain name?
Additionally, please let me know what the second line of the /etc/hosts file on the Linux machine is for.
::1 localhost6.localdomain6 localhost6
hosts file on a Linux machine:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
hosts file on a windows machine:
# Copyright (c) 1993-1999 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
127.0.0.1 localhost
|
You always want the 127.0.0.1 address to resolve first to localhost. If there is a domain name you can use that too, but then make sure localhost is listed second. If you want to add aliases for your machine that will resolve to the loopback address, you can keep adding them as space-separated values on that line. Specifying a domain here is optional, but don't remove "localhost" from the options.
| Format of /etc/hosts on Linux (different from Windows?) |
1,301,653,026,000 |
There's a website, www.example.com, that I tried to block myself from accessing because it wastes too much of my time. So I configured my /etc/hosts file. I added the following lines, to block the website on both IPv4 and IPv6:
127.0.0.1 www.example.com
::1 www.example.com
127.0.0.1 http://www.example.com
::1 http://www.example.com
127.0.0.1 example.com
::1 example.com
I restarted my computer, and I cannot wget www.example.com, and pinging www.example.com works as expected, but the website is not actually blocked in my browser! I can still access it in Firefox 28 and Chromium.
Questions
What's going on?
How do I block this site using systems-level tools instead of using browser extensions?
|
Rather than making this block using /etc/hosts, I'd suggest using a browser addon/plugin such as BlockSite for Firefox or StayFocusd for Chrome.
But I want to really use /etc/hosts file
If you must do it this way you can try adding your entries like this instead:
0.0.0.0 www.example.com
0.0.0.0 example.com
::0 www.example.com
::0 example.com
You should never add entries to this file other than hostnames. So don't put any entries in there that include prefixes such as http:// etc.
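To avoid typing four lines by hand for every site, a small sketch that prints block entries (both IPv4 and IPv6, bare name plus the www. variant) for a list of domains, ready to append to /etc/hosts:

```shell
# Sketch: emit 0.0.0.0/::0 block entries for each domain given,
# covering the bare name and its www. variant.
make_block_entries() {
    for domain in "$@"; do
        printf '0.0.0.0 %s\n' "$domain"
        printf '0.0.0.0 www.%s\n' "$domain"
        printf '::0 %s\n' "$domain"
        printf '::0 www.%s\n' "$domain"
    done
}

# Usage (append as root):
#   make_block_entries example.com >> /etc/hosts
```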
| Blocking Websites with /etc/hosts |
1,301,653,026,000 |
Utilities like host and dig let you see the IP address corresponding to the host name.
There is also the getent utility that can be used to query /etc/hosts or other NSS databases.
I am looking for a convenient standard utility (which is available in Debian, say) which resolves a host name regardless of where it is defined.
It should be more or less equivalent to
ping "$HOST" | head -1 | perl -lne '/\((.*?)\)/ && print $1'
|
The only command that I am aware of that does what you want is resolveip:
http://linux.die.net/man/1/resolveip
However it only comes with mysql-server, which may not be ideal to install everywhere.
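Since getent is part of glibc and follows the nsswitch.conf ordering (so /etc/hosts is consulted before DNS), an alternative that needs nothing extra installed is a sketch like:

```shell
# Sketch: resolve a host name through the system's normal NSS path
# (/etc/hosts first, then DNS, per nsswitch.conf) and print the
# first address found.
resolve_host() {
    getent ahosts "$1" | awk 'NR == 1 { print $1 }'
}

# Usage: resolve_host www.example.com
```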
| Host lookup that respects /etc/hosts |
1,301,653,026,000 |
What is the practical usage of /etc/networks file? As I understand, one can give names to networks in this file. For example:
root@fw-test:~# cat /etc/networks
default 0.0.0.0
loopback 127.0.0.0
link-local 169.254.0.0
google-dns 8.8.4.4
root@fw-test:~#
However, if I try to use this network name for example in ip utility, it does not work:
root@fw-test:~# ip route add google-dns via 104.236.63.1 dev eth0
Error: an inet prefix is expected rather than "google-dns".
root@fw-test:~# ip route add 8.8.4.4 via 104.236.64.1 dev eth0
root@fw-test:~#
What is the practical usage of /etc/networks file?
|
As written in the manual page, the /etc/networks file describes symbolic names for networks. By "network", what is meant is the network address with a trailing .0 at the end. Only simple class A, B, or C networks are supported.
In your example the google-dns entry is wrong: it is not a class A, B, or C network. It is an IP-address-to-hostname relationship, and therefore it belongs in /etc/hosts. Actually, the default entry is also not conformant.
Let's imagine you have the IP address 192.168.1.5 on your corporate network. An entry in /etc/networks could then be:
corpname 192.168.1.0
When using utilities like route or netstat, those networks are translated (if you don't suppress resolution with the -n flag). A routing table could then look like:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.1.1 0.0.0.0 UG 0 0 0 eth0
corpname * 255.255.255.0 U 0 0 0 eth0
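The translation that route and netstat perform can be sketched as a lookup over an /etc/networks-style file (the corpname entry is the hypothetical one from above; in practice getent networks does this for you via NSS):

```shell
# Sketch: translate a network address to its symbolic name using an
# /etc/networks-style file, roughly what route/netstat do internally.
network_name() {
    addr=$1 file=$2
    awk -v addr="$addr" '
        /^[[:space:]]*(#|$)/ { next }   # skip comments and blank lines
        $2 == addr { print $1; exit }
    ' "$file"
}

# Usage: network_name 192.168.1.0 /etc/networks
```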
| practical usage of /etc/networks file |
1,301,653,026,000 |
In Linux, how do /etc/hosts and DNS work together to resolve hostnames to IP addresses?
If a hostname can be resolved in /etc/hosts, does DNS apply after /etc/hosts to resolve the hostname, or does it treat the IP address resolved via /etc/hosts as a "hostname" to resolve recursively?
In my browser (firefox and google chrome), when I add to
/etc/hosts:
127.0.0.1 google.com www.google.com
typing www.google.com into the address bar of the browsers and
hitting entering won't connect to the website. After I remove that
line from /etc/hosts, I can connect to the website. Does it mean
that /etc/hosts overrides DNS for resolving hostnames?
After I re-add the line to /etc/hosts, I can still connect to the
website, even after refreshing the webpage. Why doesn't
/etc/hosts apply again, so that I can't connect to the website?
Thanks.
|
This is dictated by the NSS (Name Service Switch) configuration i.e. /etc/nsswitch.conf file's hosts directive. For example, on my system:
hosts: files mdns4_minimal [NOTFOUND=return] dns
Here, files refers to the /etc/hosts file, and dns refers to the DNS system. And as you can imagine whichever comes first wins.
Also, see man 5 nsswitch.conf to get more idea on this.
As an aside, to follow the NSS host resolution orderings, use getent with hosts as database e.g.:
getent hosts example.com
| How do `/etc/hosts` and DNS work together to resolve hostnames to IP addresses? |
1,301,653,026,000 |
Seems my hosts file (/etc/hosts) points to /nix/store/gds7bha3bx0a22pnzw93pgf0666anpyr-etc-hosts and is read only.
How am I meant to modify this file?
|
Modify the nixos config (usually in /etc/nixos/configuration.nix) with:
networking.extraHosts =
''
127.0.0.2 other-localhost
10.0.0.1 server
'';
This is documented at NixOS Wiki and defined here.
| How do I modify my hosts file in Nixos? |
1,301,653,026,000 |
Changes to /etc/hosts file seem to take effect immediately. I'm curious about the implementation. What magic is used to achieve this feature?
Ask Ubuntu: After modifying /etc/hosts which service needs to be restarted?
NetApp Support: How the /etc/hosts file works
|
The magic is opening the /etc/hosts file and reading it:
strace -e trace=file wget -O /dev/null http://www.google.com http://www.facebook.com http://unix.stackexchange.com 2>&1 | grep hosts
open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 4
open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 5
open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 4
The getaddrinfo(3) function, which is the only standard name resolving interface, will just open and read /etc/hosts each time it is called to resolve a hostname.
More sophisticated applications which are not using the standard getaddrinfo(3), but are still somehow adding /etc/hosts to the mix (e.g. the dnsmasq DNS server) may be using inotify(7) to monitor changes to the /etc/hosts files and re-read it only if needed.
Browsers and other such applications will not do that. They will open and read /etc/hosts each time they need to resolve a host name, even if they're not using libc's resolver directly, but are replicating its workings by other means.
| Why do changes to /etc/hosts take effect immediately? |
1,301,653,026,000 |
It seems that wildcards are not supported in the /etc/hosts file.
What is the best solution for me to resolve all *.local domains to localhost?
|
You'd really need to run your own DNS server and use wildcards. Exactly how you'd do that would depend on the DNS package you ran.
| Wildcard in /etc/hosts file |
1,301,653,026,000 |
I'm administrating a networked environment where the users authenticate over NIS.
All machines can be used to SSH into the server but one.
On the machine in question, I get the message
ssh: connect to host servername port 22: Connection refused
I compared strace outputs from the machine in question and a machine that can SSH into the server correctly.
It turns out the machine that can't SSH into the server doesn't consult /etc/hosts while the machine that can SSH correctly does. Both machines have /etc/hosts set up with the server's name and IP. In the end, the machine that doesn't consult /etc/hosts ends up trying to connect to 127.0.0.1 (localhost) and fails with the above message. What can be causing this?
Additional information:
The server I'm trying to SSH into also acts as the name server and both machines consult it while trying to SSH into it.
The machine that can't SSH into the server can SSH correctly into other machines when I do
ssh machinename
The strace logs show that the machine consults the nameserver (this time successfully) and manages to resolve the remote machine name correctly and connect to it.
EDIT: I will gladly provide any additional information that you think might help solve this issue.
|
It sounds to me like the problem host does not have a correctly configured nsswitch.conf.
The hosts line of /etc/nsswitch.conf should look something like this:
hosts: files nisplus nis dns
However, the exact contents will vary due to your environment. You should compare against working hosts and make changes accordingly.
| Why does SSH not consult /etc/hosts? |
1,301,653,026,000 |
What's the best way to confirm that your /etc/hosts file is mapping a hostname to the correct IP address?
Using a tool like dig queries an external DNS directly, bypassing the hosts file.
|
I tried this out and it seems to work as expected:
echo "1.2.3.4 facebook.com" >> /etc/hosts
Then I ran:
$ getent ahosts facebook.com
1.2.3.4 STREAM facebook.com
1.2.3.4 DGRAM
1.2.3.4 RA
| How to test /etc/hosts |
1,301,653,026,000 |
Possible Duplicate:
Can I create a user-specific hosts file to complement /etc/hosts?
In short: I would like to know if it is possible to get a ~/hosts file that could override the /etc/hosts file, since I don't have any privileged access.
A machine I am working on does seem to be properly configured with a correct DNS server. When I try to ping the usual machine names I work with, it fails, but when I ping them by IP address it works as expected.
I want to avoid changing any scripts and other muscle-memorized handcrafted command lines™ that I made because of a single improperly configured machine. I contacted the sysadmins, but they have other fish to fry.
How can I implement that?
|
Besides the LD_PRELOAD tricks, a simple alternative if you're not using nscd is to copy libnss_files.so to some location of your own, like:
mkdir -p -- ~/lib &&
cp /lib/x86_64-linux-gnu/libnss_files.so.2 ~/lib
Binary-edit the copy to replace /etc/hosts in there with something of the same length, like /tmp/hosts.
perl -pi -e 's:/etc/hosts:/tmp/hosts:g' ~/lib/libnss_files.so.2
Edit /tmp/hosts to add the entry you want. And use
export LD_LIBRARY_PATH=~/lib
for nss_files to look in /tmp/hosts instead of /etc/hosts.
Instead of /tmp/hosts, you could also make it /dev/fd//3 (the doubled slash keeps the path the same length as /etc/hosts), and do
exec 3< ~/hosts
For instance which would allow different commands to use different hosts files.
$ cat hosts
1.2.3.4 asdasd
$ LD_LIBRARY_PATH=~/lib getent hosts asdasd 3< ~/hosts
1.2.3.4 asdasd
If nscd is installed and running, you can bypass it by doing the same trick, but this time for libc.so.6 and replace the path to the nscd socket (something like /var/run/nscd/socket) with some nonexistent path.
| How can I override the /etc/hosts file at user level? [duplicate] |
1,301,653,026,000 |
The HOSTALIASES environment variable allows users to set their own host aliases instead of having to sudoedit /etc/hosts (more details, e.g., at http://blog.tremily.us/posts/HOSTALIASES/)
However, with /etc/hosts I can alias IP addresses to names and names to names, whereas HOSTALIASES only seems to work with name to name aliasing.
I tried:
cat > .hosts
work 10.10.0.1
g www.google.com
^D
export HOSTALIASES=$PWD/.hosts
and now
curl g #works
curl 10.10.0.1 #works
curl work #doesn't work
Can I make curl work work without needing to edit a file I don't have write permissions to (/etc/hosts) ?
|
The HOSTALIASES feature is provided by the resolver function gethostbyname() in glibc. In this function an alias lookup result is passed as-is to subsequent libnss module calls specified by hosts: in /etc/nsswitch.conf; therefore, if there is no module that can handle it, gethostbyname() will end up failing.
Note that in most programs a numerical address notation like 10.10.0.1 or 2a00:1450:400c:c05::67 is processed by inet_aton(), inet_pton() or getaddrinfo() before gethostbyname() is called.
Some DNS servers, including dnsmasq, return valid address records for queries with a numerical address string, as if inet_aton() had been applied to it: e.g. they return the A record 10.10.0.1 for a query for the FQDN 10.10.0.1.. However, other servers, including BIND, just return NXDOMAIN for such queries. So you cannot rely on this to define work 10.10.0.1 in your HOSTALIASES as an alternative to /etc/hosts.
One possible workaround is to utilize a public DNS service like xip.io to get resolvable FQDNs for arbitrary IPv4 addresses. For example you can define work for 10.10.0.1 like this:
work 10.10.0.1.xip.io
| Hostaliases file with an IP address |
1,301,653,026,000 |
I am trying to make a productivity suite for myself. My first goal is to block Facebook, Gmail and Stackexchange from 0900 to 1600.
As of now, I have edited my /etc/hosts and added 0.0.0.0 www.facebook.com and similar ones for gmail and stackexchange.
But I am a little confused about how to include the blocking duration in my script.
What I thought of is having two different files (hosts_allow, hosts_block) and then running cp hosts_allow hosts or cp hosts_block hosts depending on the time, but this would need to be put in an infinite loop or something, which I'm not really sure is the best way of approaching the problem.
Any clues?
|
Use cron.
Say crontab -e as root — or sudo crontab -e if you have sudo set up — and put the following in the file that comes up in the text editor:
0 9 * * * cp /etc/hosts_worktime /etc/hosts
0 16 * * * cp /etc/hosts_playtime /etc/hosts
This says that on the zeroth minute of the 9th and 16th hours of every day of the month, overwrite /etc/hosts using the shell commands given.
You might actually want something a little more complicated:
0 9 * * 1-5 cp /etc/hosts_worktime /etc/hosts
0 16 * * 1-5 cp /etc/hosts_playtime /etc/hosts
That one change — putting 1-5 in the fifth position — says the change between work and play time happens only on Monday through Friday.
Say man 5 crontab to get a full explanation of what all you can do in a crontab file.
By the way, I changed the names of your hosts files above, because hosts_allow is too close to hosts.allow, used by TCP Wrappers.
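If you'd rather keep the time logic in one script than in two crontab entries, the hour check can be sketched like this (same hypothetical file names; taking the hour as a parameter keeps the logic testable):

```shell
# Sketch: choose the hosts variant based on the hour of day.
# Work hours are 09:00-15:59, matching the cron entries above.
pick_hosts() {
    hour=$1
    if [ "$hour" -ge 9 ] && [ "$hour" -lt 16 ]; then
        echo /etc/hosts_worktime
    else
        echo /etc/hosts_playtime
    fi
}

# In cron you would then run something like:
#   cp "$(pick_hosts "$(date +%H)")" /etc/hosts
```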
| Two different /etc/hosts depending upon the time |
1,301,653,026,000 |
More of an Amazon Web Services EC2 question, hopefully not too off-topic. I have a vanilla Ubuntu instance with them, and powered it off. On reboot I couldn't ssh to the FQDN because the external IP address had changed.
Does it cost extra for an "elastic", or even a truly static, IP address? I would settle for semi-permanent. Elastic sounds about right, in marketing speak.
Or, is this a security measure? I did select to have an external IP address, but it is a free account. The documentation says:
An Elastic IP address is a static IPv4 address designed for dynamic
cloud computing. An Elastic IP address is associated with your AWS
account. With an Elastic IP address, you can mask the failure of an
instance or software by rapidly remapping the address to another
instance in your account.
An Elastic IP address is a public IPv4 address, which is reachable
from the Internet.
The external IP address, the one I used for ssh, is different from what I see when running ifconfig, so they're using some form of NAT, as explained here:
Important
You can't manually disassociate the public IP address from your
instance after launch. Instead, it's automatically released in certain
cases, after which you cannot reuse it.
Even though I selected an external IP address, reboot appears (?) to be a circumstance where the IP address is released back into the pool. Please clarify that understanding.
|
The main difference between the two is that:
You will lose your public IP when you stop and start the instance, while the EIP remains linked to the instance even after a stop/start operation (or until you explicitly detach it from the instance).
Concerning costs, there is no recurring fee for EIP usage while you keep it attached to a running instance; otherwise you pay for a resource that is allocated but not used.
| How is an Elastic IP address different from a static IP address? |
1,301,653,026,000 |
With getent hosts localhost, I only get ::1, although I expect 127.0.0.1. I have IPv6 disabled, so getting ::1 is even more surprising. To add to the confusion, when I ping localhost, pings are sent to 127.0.0.1 which works. Can someone explain this?
~: getent hosts localhost
::1 localhost
~: grep 'hosts:' /etc/nsswitch.conf
hosts: files mymachines myhostname resolve [!UNAVAIL=return] dns
~: cat /etc/sysctl.d/disable_ipv6.conf
net.ipv6.conf.all.disable_ipv6=1
~: ping ::1
connect: Network is unreachable
~: ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
~: ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.015 ms
edit: There is no localhost in my /etc/hosts.
|
Finding this wasn't easy (but fun :)).
Short answer
gethostbyname2(), which uses __lookup_name(), has some hard-coded values for the loopback ('lo') interface. When you specify 'localhost' to the 'getent hosts' command it ends up using the default value for IPv6 before it tries IPv4, thus you end up with ::1. You can change the code of getent in order to get 127.0.0.1 like so:
Download getent source code from github
Comment-out the following line (#329) in hosts_keys() under getent.c:
//else if ((host = gethostbyname2 (key[i], AF_INET6)) == NULL)
Compile and run from source:
Result:
$make clean && make && ./getent hosts localhost
rm -f *.o
rm -f getent
gcc -g -Wall -std=gnu99 -w -c getent.c -o getent.o
gcc getent.o -Wall -lm -o getent
127.0.0.1 localhost
More details
The getent tool uses functions defined and implemented by the musl library. When we run the command
$getent hosts localhost
The tool calls the hosts_keys() function under getent.c in order to resolve the provided key. The function tries resolving by 4 methods:
gethostbyaddr for IPv6 (fails in this instance).
gethostbyaddr for IPv4 (fails in this instance).
gethostbyname2 for IPv6 (succeeds always for localhost due to hard-coded values).
gethostbyname2 for IPv4 (doesn't try due to success on #3).
All musl functions are implemented under /src/network/, see here. gethostbyname2() (implemented in gethostbyname2.c) calls gethostbyname2_r() (implemented in gethostbyname2_r.c), which calls __lookup_name() (in lookup_name.c). __lookup_name(), again, has a few options for how to resolve the host name, the first one being name_from_null (in the same file):
static int name_from_null(struct address buf[static 2], const char *name, int family, int flags)
{
int cnt = 0;
if (name) return 0;
if (flags & AI_PASSIVE) {
if (family != AF_INET6)
buf[cnt++] = (struct address){ .family = AF_INET };
if (family != AF_INET)
buf[cnt++] = (struct address){ .family = AF_INET6 };
} else {
if (family != AF_INET6)
buf[cnt++] = (struct address){ .family = AF_INET, .addr = { 127,0,0,1 } };
if (family != AF_INET)
buf[cnt++] = (struct address){ .family = AF_INET6, .addr = { [15] = 1 } };
}
return cnt;
}
At the very end, we can see that when family == AF_INET6 we will get the hard-coded value of ::1. Since getent tries IPv6 before IPv4, this would be the returned value. As I showed above, forcing resolve as IPv4 in getent will result in the hard coded 127.0.0.1 value from the function above.
If you wish to change the functionality to return IPv4 address for localhost, best thing would be to submit/request a fix for getent to search for IPv4 first.
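Alternatively, without patching anything: on glibc systems, getent can be told which address family to use, so the IPv4 answer can be requested directly. This is a glibc feature and will not change the behaviour of the musl-based getent discussed above:

```shell
# glibc's getent supports family-specific host databases, so the
# IPv4 answer for localhost can be requested explicitly.
getent ahostsv4 localhost
```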
Hope this helps!
| Why does localhost resolve to ::1 but not 127.0.0.1 |
1,301,653,026,000 |
I can parse the /etc/passwd with augtool:
myuser=bob
usershome=`augtool -L -A --transform "Passwd incl /etc/passwd" print "/files/etc/passwd/$myuser/home" | sed -En 's/\/.* = (.*)/\1/p'`
...but it seems a little too convoluted.
Is there any simple, dedicated tool for displaying users'home (like usermod can be used to change it)?
|
You should never parse /etc/passwd directly. You might be on a system with remote users, in which case they won't be in /etc/passwd. The /etc/passwd file might be somewhere else. Etc.
If you need direct access to the user database, use getent.
$ getent passwd phemmer
phemmer:*:1000:4:phemmer:/home/phemmer:/bin/zsh
$ getent passwd phemmer | awk -F: '{ print $6 }'
/home/phemmer
However there is also another way that doesn't involve parsing:
$ user=phemmer
$ eval echo "~$user"
/home/phemmer
The ~ operator in the shell expands to the specified user's home directory. However we have to use the eval because expansion of the variable $user happens after expansion of ~. So by using the eval and double quotes, you're effectively expanding $user first, then calling eval echo "~phemmer".
Once you have the home directory, simply tack /.ssh on to the end.
$ sshdir="$(eval echo "~$user/.ssh")"
$ echo "$sshdir"
/home/phemmer/.ssh
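The two approaches can be wrapped in a small helper; this is a sketch, and note that the getent form is preferable for untrusted input, since eval is unsafe if $user can contain shell metacharacters:

```shell
# Print a user's ~/.ssh directory using getent, which respects remote
# user databases (LDAP, NIS, ...) as well as the local /etc/passwd.
user_ssh_dir() {
    home=$(getent passwd "$1" | cut -d: -f6)
    [ -n "$home" ] || return 1
    printf '%s/.ssh\n' "$home"
}

user_ssh_dir root
```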
| What is the reliable way of getting each user's .ssh directory from bash? |
1,301,653,026,000 |
zsh is great so far.
I am using zsh-completions but still I am unable to get autocompletion for ssh commands like in bash as shown in below screenshot:
How to get hostnames from /etc/hosts for ssh | scp | telnet command autocompletion in zsh shell ?
Update 1:
https://github.com/sunlei/zsh-ssh : This SSH completion offers a greater array of features in comparison to the default SSH completion.
|
Zsh won't offer you host completions until you enable its full completion system. You can do so by adding the following to your .zshrc file:
autoload -Uz compinit
compinit
Once initialized, Zsh's completion system will retrieve host names from /etc/hosts, /etc/ssh/ssh_known_hosts, ~/.ssh/known_hosts and Host .../Name ... lines in ssh config files. Have a look inside those files.
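If you want to control exactly which names are offered, the host list can be overridden with the hosts style. A sketch that draws only from /etc/hosts (the awk expression is an assumption about your hosts file layout; it takes the second column of non-comment lines):

```zsh
# Rebuild the host list from /etc/hosts on every completion (-e makes
# zsh re-evaluate the value, so edits to the file are picked up).
zstyle -e ':completion:*' hosts \
    'reply=($(awk "!/^#/ {print \$2}" /etc/hosts))'
```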
Note that if you have aliased ssh to another command, zsh will offer completions for that other command. If you want it to offer the same completions for that other command as those for ssh, you need to tell it to do so, with something like this:
alias ssh='<some other command>'
compdef '<some other command>'=ssh
Alternatively, you can use a function instead of an alias:
ssh() {
<some other-command> "$@"
}
| bash like autocompletion for ssh command in zsh shell with /etc/hosts file? |
1,301,653,026,000 |
On my local DHCP network I have different PC's that I need to access remotely. Problem is that their IP's change. Sometimes I plug my laptop and netbook into other people's DHCP networks.
My current solution is to update the /etc/hosts file every time the target IP's change.
My /etc/hosts file looks like this:
# <ip-address> <hostname.domain.org> <hostname>
127.0.0.1 localhost.localdomain localhost laptop
192.168.1.14 desktop.localdomain desktop
192.168.1.12 netbook.localdomain netbook
Is there a way to bypass all that manual administration?
For example, could my computers broadcast their IPs on the LAN, or something like that? Windows does something like that, which allows you to reference a computer on the network with "\\COMPUTER_NAME"
|
It depends on what is providing the DHCP service.
Most home routers use dnsmasq and you can use that as your local DNS server. You just need to set dnsmasq to return itself as the DNS server. Next, you need to make sure that your PCs broadcast a hostname during the DHCP request.
Then, voila, you should be able to resolve all your local machines through the DNS/DHCP server.
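As a sketch, a minimal dnsmasq configuration for this setup could look like the following. dnsmasq answers DNS from /etc/hosts and from DHCP-registered hostnames by default; the domain name and address range here are assumptions to adapt to your LAN:

```
# /etc/dnsmasq.conf
domain-needed        # never forward plain (dot-less) names upstream
bogus-priv           # never forward reverse lookups for private ranges
expand-hosts         # append the local domain to names from /etc/hosts
domain=lan           # the local domain (assumption)
dhcp-range=192.168.1.50,192.168.1.150,12h   # hand out leases on the LAN
```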
| Maintaining `/etc/hosts` for hosts on DHCP |
1,301,653,026,000 |
I was trying to get a perl script that contacts a PostgreSQL database to work on a server. This script was mysteriously failing. I then realised that localhost was not in the /etc/hosts file.
The file for this machine (currently running Debian lenny) currently looks like
127.0.0.1 machinename.domain machinename
xxx.xx.x.xxx machinename.domain machinename
The xxx.xx.x.xxx is the IP address. The file for my current home machine, which is a slightly older installation (currently running Debian squeeze) is
127.0.0.1 machinename localhost
127.0.1.1 machinename.domain machinename
My home machine sits behind a router and is not directly exposed to the internet. In any case, I'm on DSL and have no static IP address.
I've kept /etc under version control for my machines (using etckeeper) for some time, and I see that for this server, the following change was made by some mastermind (possibly myself) on Dec 17th 2009.
-127.0.0.1 localhost
+127.0.0.1 machinename.domain machinename
I've wondered before why this file is set up the way this is, but the answer is not obvious. Some questions:
Why 127.0.1.1? This might be a Debian-specific bit of history. I did a bit of searching on the net, and found some vague mutterings about Gnome, but little of any substance.
Where in Debian is the template this file is set from?
Is there currently considered to be a correct/best form for this file?
Is the order of the names in the line significant? I hope not.
More generally, what is the explanation of why these two lines are structured the way they are?
For now, I think I'll change the server /etc/hosts to
127.0.0.1 machinename.domain machinename localhost
xxx.xx.x.xxx machinename.domain machinename
Comments?
|
On my system, I have the following in /var/lib/dpkg/info/netbase.postinst:
create_hosts_file() {
if [ -e /etc/hosts ]; then return 0; fi
cat > /etc/hosts <<-EOF
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
EOF
}
My netbase version is 4.45.
I would expect that the first name on the line will be returned for a reverse lookup of the IP address, otherwise I doubt the order matters.
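That expectation is easy to check with getent, which goes through the normal NSS machinery:

```shell
# Reverse-resolve 127.0.0.1; the first name on the matching
# /etc/hosts line is printed as the canonical name.
getent hosts 127.0.0.1
```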
| /etc/hosts for Debian |
1,301,653,026,000 |
I use a hosts file to block ads/malware domains. When I use ssh or scp in zsh and try to tab-complete, it takes a good 5-10 seconds before anything appears and what does appear is usually a list of 20+ domains I have blocked, and buried in there is the file I need.
I've searched online quite a bit and found helpful hints on how to add this sort of auto-completion, but I can't figure out how to remove it. I basically never want the tab-complete to search my hosts file (I use known-hosts for the servers I actually do want to tab-complete to).
This is on OS X.
|
Try adding the following to your zshrc
zstyle ':completion:*' hosts off
| ignore hosts file in ZSH ssh/scp tab-complete |
1,301,653,026,000 |
I have a problem when I send a mail to [email protected] from [email protected]. My emails bounce with this error: status=bounced (mail for orbialia.es loops back to myself)
May 9 09:33:58 ns3285243 postfix/smtpd[1606]: connect from localhost.localdomain[127.0.0.1]
May 9 09:33:58 ns3285243 postfix/smtpd[1606]: 1EF6FA1DB9: client=localhost.localdomain[127.0.0.1]
May 9 09:33:58 ns3285243 postfix/cleanup[1584]: 1EF6FA1DB9: message-id=<[email protected]>
May 9 09:33:58 ns3285243 postfix/qmgr[1575]: 1EF6FA1DB9: from=<[email protected]>, size=7184, nrcpt=1 (queue active)
May 9 09:33:58 ns3285243 postfix/smtpd[1606]: disconnect from localhost.localdomain[127.0.0.1]
May 9 09:33:58 ns3285243 amavis[15721]: (15721-16) Passed CLEAN {RelayedInbound}, [79.145.170.251]:1991 [79.145.170.251] <[email protected]> -> <[email protected]>, Queue-ID: 9D0DCA1DB4, Message-ID: <[email protected]>, mail_id: 8JHrgdOkE3Pw, Hits: -0.999, size: 6675, queued_as: 1EF6FA1DB9, 26690 ms
May 9 09:33:58 ns3285243 postfix/smtp[1588]: 9D0DCA1DB4: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=28, delays=1/0.02/0.01/27, dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10025): 250 2.0.0 Ok: queued as 1EF6FA1DB9)
May 9 09:33:58 ns3285243 postfix/qmgr[1575]: 9D0DCA1DB4: removed
May 9 09:33:58 ns3285243 postfix/smtp[1607]: 1EF6FA1DB9: to=<[email protected]>, relay=none, delay=0.11, delays=0.09/0.02/0/0, dsn=5.4.6, status=bounced (mail for orbialia.es loops back to myself)
May 9 09:33:58 ns3285243 postfix/cleanup[1584]: 42EC4A1DB4: message-id=<[email protected]>
May 9 09:33:58 ns3285243 postfix/bounce[1608]: 1EF6FA1DB9: sender non-delivery notification: 42EC4A1DB4
May 9 09:33:58 ns3285243 postfix/qmgr[1575]: 42EC4A1DB4: from=<>, size=9465, nrcpt=1 (queue active)
May 9 09:33:58 ns3285243 postfix/qmgr[1575]: 1EF6FA1DB9: removed
May 9 09:33:58 ns3285243 postfix/smtp[1607]: 42EC4A1DB4: to=<[email protected]>, relay=none, delay=0.05, delays=0.04/0/0/0, dsn=5.4.6, status=bounced (mail for orbialia.es loops back to myself)
May 9 09:33:58 ns3285243 postfix/qmgr[1575]: 42EC4A1DB4: removed
This is my postfix config
alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
append_dot_mydomain = no
biff = no
config_directory = /etc/postfix
content_filter = smtp-amavis:[127.0.0.1]:10024
inet_interfaces = all
mailbox_command = procmail -a "$EXTENSION"
mailbox_size_limit = 0
milter_default_action = accept
milter_protocol = 2
mydestination = localhost, ns3285243.ip-5-135-177.eu
myhostname = rentabiliza.net
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
myorigin = /etc/mailname
non_smtpd_milters = inet:localhost:8891
policy-spf_time_limit = 3600s
readme_directory = no
recipient_delimiter = +
relayhost =
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
smtpd_milters = inet:localhost:8891
smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination check_policy_service unix:private/policy-spf
smtpd_sasl_auth_enable = yes
smtpd_sasl_path = private/auth
smtpd_sasl_type = dovecot
smtpd_tls_auth_only = yes
smtpd_tls_cert_file = /etc/ssl/certs/dovecot.pem
smtpd_tls_key_file = /etc/ssl/private/dovecot.pem
smtpd_use_tls = yes
virtual_alias_domains = mysql:/etc/postfix/sql-domain-aliases.cf
virtual_alias_maps = mysql:/etc/postfix/sql-aliases.cf, mysql:/etc/postfix/sql-domain-aliases-mailboxes.cf, mysql:/etc/postfix/sql-email2email.cf, mysql:/etc/postfix/sql-catchall-aliases.cf
virtual_mailbox_domains = mysql:/etc/postfix/sql-domains.cf
virtual_mailbox_maps = mysql:/etc/postfix/sql-mailboxes.cf
virtual_transport = lmtp:unix:private/dovecot-lmtp
I use virtual domains/users in a mysql database
hostname: ns3285243.ip-5-135-177.eu
hostname -f: ns3285243.ip-5-135-177.eu
/etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
5.135.177.115 ns3285243.ip-5-135-177.eu ns3285243
2001:41D0:8:B873::1 ns3285243.ip-5-135-177.eu ns3285243
# The following lines are desirable for IPv6 capable hosts
#(added automatically by netbase upgrade)
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
When I try to send mail to gmail.com to destination it receives the email successfully.
If I put orbialia.es in mydestination, I receive this:
May 9 09:47:28 ns3285243 postfix/smtpd[2601]: NOQUEUE: reject: RCPT from 251.Red-79-145-170.dynamicIP.rima-tde.net[79.145.170.251]: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown in local recipient table; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<50l3rport>
I have multiple virtual domains. How can I resolve this?
|
From comment:
The problem in this case is that the domain has not been marked as active, so when mysql is queried for active domains this one is not returned.
| Postfix email bounced (mail for domain loops back to myself) |
1,301,653,026,000 |
I would like to set one IP for a zone (livejournal.com)
currently I am having to directly type the subdomains like:
11.11.11.11 sub1.livejournal.com
11.11.11.11 sub2.livejournal.com
11.11.11.11 sub3.livejournal.com
etc.
I tried
11.11.11.11 *.livejournal.com
and
11.11.11.11 .livejournal.com
didn't help.
So I want to have only one line and resolve missing subdomains to IP like: sub1000.livejournal.com without explicitly specifying it
|
This can be implemented with a DNS forwarder that acts like a very basic DNS server. The popular implementation is Dnsmasq; alternatively, services like OpenDNS that perform DNS filtering for you may be able to do this as well.
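With dnsmasq this is a one-line configuration; the address option matches the domain and every subdomain under it:

```
# /etc/dnsmasq.conf
# Answer livejournal.com and any *.livejournal.com query with this IP
address=/livejournal.com/11.11.11.11
```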
| /etc/hosts file syntax. Is it possible to set one IP for a zone? |
1,301,653,026,000 |
So I have an IP Address 5x.2x.2xx.1xx I want to map to localhost. In my hosts file I have:
cat /etc/hosts
127.0.1.1 test test
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
5x.2x.2xx.1xx 127.0.0.1
What I want to accomplish is that when I connect in this machine to 5x.2x.2xx.1xx, I go to localhost.
What I really want is to connect to MySQL using
mysql -uroot 5x.2x.2xx.1xx -p and instead of pointing to that IP address I want to use the local MySQL server
At the time it isn't working since it stills redirect to the server's IP (5x.2x.2xx.1xx)
I've also tried: sudo service nscd restart with no luck
|
/etc/hosts can be used if you want to map a specific DNS name to a different IP address than it really has, but if the IP address is already specified by the application, that and any other techniques based on manipulating hostname resolution will be useless: the application already has a perfectly good IP address to connect to, so it does not need any hostname resolution services.
If you want to redirect traffic that is going out to a specified IP address back to your local system, you'll need iptables for that.
sudo iptables -t nat -I OUTPUT --dst 5x.2x.2xx.1xx -p tcp --dport 3306 -j REDIRECT --to-ports 3306
This will redirect any outgoing connections from your system to the default MySQL port 3306 of 5x.2x.2xx.1xx back to port 3306 of your own system. Replace the 5x.2x.2xx.1xx and 3306 with the real IP address and port numbers, obviously.
The above command will be effective immediately, but will not persist over a reboot unless you do something else to make the settings persistent, but perhaps you don't even need that?
| How to map an IP address to localhost |
1,301,653,026,000 |
Let's say a website does not have a domain name like www.google.com and the only way to connect to it is to use an IP address like 216.58.212.68. If I add an entry in /etc/hosts that looks likes 0.0.0.0 216.58.212.68, will it block connections to that website? Would a web browser be blocked from visiting it too?
Additionally, would this also apply to local addresses like 192.168.0.1?
|
No. The hosts file doesn't affect any routing. It only affects name lookups. Since 216.58.212.68 is an IP address, the system won't look it up in the hosts table.
Read here for more info on the hosts file: http://manpages.ubuntu.com/manpages/trusty/man5/hosts.5.html
If you want to block connections to an IP address from your system, there are a couple of ways to do that, like:
Blackhole traffic using the route command:
route add 216.58.212.68 gw 127.0.0.1 lo
Reject traffic using the route command:
route add -host 216.58.212.68 reject
Null route using the ip command:
ip route add blackhole 216.58.212.68/32
Now, if you want to block traffic to a system by name, you can add a fake entry to your hosts file pointing that name to the loopback address:
127.0.0.1 badactor.evil.com
Then any traffic trying to get to that host from your system will be faked out – as long as your system is set to use the hosts file lookup prior to DNS. Any specifically DNS based lookups will still work, although you could use a DNSMASQ server like a Pi Hole to block even DNS lookups.
Make sure you read the man pages for the route and ip commands, so you'll understand how to make these commands persistent across reboots if you need them to be.
| Ubuntu /etc/hosts addresses in the form *.*.*.* |
1,301,653,026,000 |
If I want to add an entry to /etc/hosts to resolve all traffic for example.com to 1.2.3.4, do I need to add
1.2.3.4 example.com
1.2.3.4 www.example.com
1.2.3.4 smtp.example.com
1.2.3.4 pop.example.com
...
Or will just adding
1.2.3.4 example.com
suffice?
|
You will need to specify each and every subdomain. If that is not what you want, you should look into installing a real DNS server (e.g. bind9).
This is rather easy to check by first adding just example.com to /etc/hosts and then doing
ping -c 2 example.com
ping -c 2 www.example.com
The first will succeed with the provided IP address. The second will go to 93.184.216.119 (the IP address on the internet for www.example.com)
| Should /etc/hosts contain the domain name or the FQDN? |
1,301,653,026,000 |
I have a machine that I can only access using SSH.
I was messing with the hostnames, and now it says:
ssh: unable to resolve hostname
I know how to fix it in /etc/hosts.
Problem is, I need sudo to fix them because my normal account doesn't have permissions.
What's the best way to fix the hosts?
|
You don't need sudo to fix that; try pkexec:
pkexec nano /etc/hosts
pkexec nano /etc/hostname
After running pkexec nano /etc/hosts, add your new hostname in the line that starts with 127.0.1.1 like below,
127.0.0.1 localhost
127.0.1.1 your-hostname
And also don't forget to add your hostname inside /etc/hostname file after running pkexec nano /etc/hostname command,
your-hostname
Restart your PC. Now it works.
| How to edit /etc/hosts without sudo? |
1,301,653,026,000 |
I'm learning about networking. At home I have two physical machines and a bunch of VMs that I use to test my applications. Each machine has a different hostname, and I map them manually in each machine's /etc/hosts file.
I would like to know what's the difference between the loopback IP address (127.0.0.1) and a real IP address given by the network in /etc/hosts.
for example
let's say my IP address is 192.168.2.20 and the name host is naruto and my /etc/hosts looks like this:
127.0.0.1 localhost
192.168.2.20 naruto
127.0.0.1 naruto
All lines point to the same machine. I understand that the main difference is how programs connect to each of them: two use the loopback device and the other one uses a NIC. My question is: should I have all these lines? If not, which lines should I have, and what is the use of each of them?
I was reading this post but it didn't help; I only got more confused.
|
Question #1
I would like to know what's the difference between home IP adress (127.0.0.1) and a real IP address given by the network in /etc/hosts
2 key characteristics of 127.0.0.1:
It's not routable outside of your computer on the Internet.
The IP address 127.0.0.1 is part of a block of IP addresses that are associated with this interface on your system.
For example, take a look at your loopback interface, lo:
$ ip a l lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
The block of IPs is designed by this line:
inet 127.0.0.1/8 scope host lo
The /8 in this notation means that 8 bits of the 32-bits being presented here are the network's address, the remaining bits (32-8 = 24) are for addressing whatever you want within this computer.
We can convince ourselves that this is a range and they all point back to ourselves by trying to ping a couple of them. Let's ping 127.0.0.1, 127.0.0.2, & 127.0.0.3:
$ ping -c2 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.055 ms
$ ping -c2 127.0.0.2
PING 127.0.0.2 (127.0.0.2) 56(84) bytes of data.
64 bytes from 127.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms
64 bytes from 127.0.0.2: icmp_seq=2 ttl=64 time=0.052 ms
$ ping -c2 127.0.0.3
PING 127.0.0.3 (127.0.0.3) 56(84) bytes of data.
64 bytes from 127.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 127.0.0.3: icmp_seq=2 ttl=64 time=0.075 ms
NOTE: We can see that all these were "pingable" back to ourselves through our loopback interface.
Using traceroute shows the same thing:
$ traceroute -n 127.0.0.1
traceroute to 127.0.0.1 (127.0.0.1), 30 hops max, 60 byte packets
1 127.0.0.1 0.032 ms 0.041 ms 0.010 ms
$ traceroute -n 127.0.0.2
traceroute to 127.0.0.2 (127.0.0.2), 30 hops max, 60 byte packets
1 127.0.0.2 0.033 ms 0.009 ms 0.008 ms
$ traceroute -n 127.0.0.3
traceroute to 127.0.0.3 (127.0.0.3), 30 hops max, 60 byte packets
1 127.0.0.3 0.034 ms 0.010 ms 0.008 ms
Question #2
my question is should I have all these lines? or what lines should I have? what's the use of each of them?
My recommendation would be to not assign any names to 127.0.0.1 except for whatever the system automatically assigned to it. Typically you'll see these types of entries in /etc/hosts:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
If I want to assign additional localhost type IPs for my system's hostname, then I'd use 127.0.0.2 instead, leaving 127.0.0.1 as it was setup by default.
Further still, for actual IP addresses that are assigned to my host, I'd either assign them like so in /etc/hosts or use DNS:
192.168.2.20 naruto.mydom.com naruto
But I would never assign the same name to 2 separate lines. This will never work, since the /etc/hosts file will only respond with the 1st entry, and the 2nd can never be reached.
For one off type work, using /etc/hosts is easy for local work. But if you expect any of the name to IP mappings to be accessible on your network, it's better to use DNS for those and forgo using /etc/hosts for anything but local IP/name resolution.
| how to configure /etc/hosts properly [duplicate] |
1,301,653,026,000 |
I have a Ubuntu Server 12.04 (amd64) machine on which, when I change /etc/hosts, the changes aren't picked up, even after a reboot. I am using /usr/bin/host to test, but none of the other programs seems to pick it up either.
This is a server and nscd and dnsmasq aren't installed. Also, the file /etc/nsswitch.conf contains the line:
hosts: files dns
so that I would expect it to work. I also checked that the mtime of the file changes with editing and tried running service networking restart (against all odds) and also resolvconf -u.
All commands were run as root where needed. The machine has network configured manually in /etc/network/interfaces and not via Network Manager (it isn't installed either).
Basically what I want to achieve is that the IP for a few hosts can be manipulated. The reason being that inside our network I get an IP to which I have no route, but I can use the external IP for that service via HTTPS.
What am I missing?
Note: no DNS server is locally running and the nameserver lines in /etc/resolv.conf (and the respective lines in interfaces) point to the DNS server that gives me the wrong IP.
Also note: I've searched on the web and read through the "similar questions", but my case doesn't seem to be covered.
/etc/host.conf is:
# The "order" line is only used by old versions of the C library.
order hosts,bind
multi on
|
The host command doesn't check the hosts file. From the manpage:
host is a simple utility for performing DNS lookups.
If you want to test lookups while respecting the hosts file, then use ping or getent.
$ tail -1 /etc/hosts
127.0.0.1 google.com
$ ping -c1 google.com | head -1
PING google.com (127.0.0.1) 56(84) bytes of data.
$ getent ahosts google.com
127.0.0.1 STREAM google.com
127.0.0.1 DGRAM
127.0.0.1 RAW
| /usr/bin/host not picking up changes to /etc/hosts even after reboot |
1,301,653,026,000 |
I have a computer with Debian at home, acting as a server with two Ethernet cards: eth0 connected to the router in DHCP mode and eth1 to a switch (static address) that holds four more computers.
I'm using the PC as the gateway-firewall for the others. Since I only have four more PCs in the internal network, I don't want to set up BIND on the server. It is easier to use the hosts file to resolve the names of the four PCs, but I can't make the server look into the file /etc/hosts. The server has no configuration at all; it's only using the defaults obtained from my ISP. How can I make the server resolve the addresses in the hosts file?
|
That's because the /etc/hosts is simply a file on your Debian server that it utilizes for its own name resolution.
It doesn't use the file to provide any DNS services.
Since you don't want to set up BIND, may I recommend that you look at dnsmasq instead?
It's lightweight and can act as a DNS and DHCP server, simply by making use of your hosts file.
| How to make my server use the hosts file to resolve names? |
1,301,653,026,000 |
I want to disable DNS lookups via DNS server and only resolve host names that are listed in /etc/hosts. I am on Raspbian 9. How would I set this up?
|
Remove dns from hosts field in /etc/nsswitch.conf:
hosts: files
You might also want to remove the DNS servers from /etc/resolv.conf.
| Only use /etc/hosts for resolving hostnames on Linux |
1,301,653,026,000 |
I've been having a problem where when our vps provider decides to restart the server (running Debian 5.0.8), the server fails to remember changes to /etc/hosts. All I need is an database alias that is used for the web applications on the server which points to 127.0.0.1 (localhost).
I want it to look like this:
# The following lines are desirable for IPv6 capable hosts
# (added automatically by netbase upgrade)
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
127.0.0.1 localhost.localdomain localhost webservice database
# Auto-generated hostname. Please do not remove this comment.
XXX.XX.XXX.XX xxxxxx.net.au xxxxxx www.xxxxxxx.net.au xxxxxxx
However whenever there is a reboot it resets itself to:
# The following lines are desirable for IPv6 capable hosts
# (added automatically by netbase upgrade)
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
127.0.0.1 localhost.localdomain localhost webservice
# Auto-generated hostname. Please do not remove this comment.
XXX.XX.XXX.XX xxxxxx.net.au xxxxxx www.xxxxxxx.net.au xxxxxxx
without the database entry, and I have to manually change the file to get things to work. This has been happening for a while and has become a nuisance, but I can't seem to find a way to make the changes stick. Anyone know what to do?
|
Removing the #Auto-generated hostname line and then making the changes caused whatever was generating the host names to remember it now. Works for me but this may not work for everyone.
| /etc/hosts in debian resets itself on reboot |
1,301,653,026,000 |
I have a server at home, with internal IP 192.168.1.100. I use a dynamic DNS service so that I can reach it at http://foo.dynu.com when I am out. When I have my laptop at home, I know that I could directly connect to the server by adding the following line to /etc/hosts.
192.168.1.100 foo.dynu.com
However, is there a way to automatically apply this redirect only when I'm on my home network? (I usually connect via a particular wifi connection, although I occasionally connect via ethernet. If this complicates matters, then I'm happy to only set it for the wifi connection.) I use Network Manager.
Also, I connect to the internet via a VPN, so presumably any configuration on my (OpenWRT) router is unlikely to work.
|
As per @garethTheRed's suggestion in the comments, I created a Network Manager Dispatcher hook.
Create the following file at /etc/NetworkManager/dispatcher.d/99_foo.dynu.com.sh. It runs whenever a network interface comes up (i.e. ethernet or wifi). It then identifies my "home network" in two ways: by the BSSID/SSID and by the static IP that my router assigns me. (At the moment it doesn't work when I connect via ethernet, since that's relatively rare.) It then appends the mapping to the hosts file if we are on the home network; otherwise it removes the line.
#!/bin/sh
# Map domain name to internal IP when connected to home network (via wifi)
# Partially inspired by http://sysadminsjourney.com/content/2008/12/18/use-networkmanager-launch-scripts-based-network-location/
WIFI_ID_TEST='Connected to 11:11:11:11:11:11 (on wlp3s0)
SSID: WifiName'
LOCAL_IP_TEST='192.168.1.90'
MAPPING='192.168.1.100 foo.dynu.com'
HOSTS_PATH=/etc/hosts
IF=$1
STATUS=$2
# Either wifi or ethernet goes up
if [ "$STATUS" = 'up' ] && { [ "$IF" = 'wlp3s0' ] || [ "$IF" = 'enp10s0' ]; }; then
    # BSSID and my static IP, i.e. home network
    if [ "$(iw dev wlp3s0 link | head -n 2)" = "$WIFI_ID_TEST" ] && [ -n "$(ip addr show wlp3s0 to ${LOCAL_IP_TEST})" ]; then
        grep -qx "$MAPPING" "$HOSTS_PATH" || echo "$MAPPING" >> "$HOSTS_PATH"
    else
        # Use printf | sed rather than a <<< herestring, which is a
        # bashism and not guaranteed to work under #!/bin/sh
        ESC_MAPPING="^$(printf '%s\n' "$MAPPING" | sed 's/\./\\./g')$"
        sed -i "/${ESC_MAPPING}/d" "$HOSTS_PATH"
    fi
fi
| Can I map an internal IP address to a domain name, when on a particular network? |
1,301,653,026,000 |
I'm looking for a DRY solution that would allow me both
ssh server-alias
and
ping server-alias
I know the former is reading .ssh/config, and the latter /etc/hosts, but I'd rather not repeat the same aliases in both files. Is it a good idea to have a single source for both? If so, how can I accomplish it?
I can simply omit the HostName line from the .ssh/config entry, and it will be read from /etc/hosts. Does this approach have any disadvantages? If so, what should I use instead? It's intended for a single-user laptop, and no host masking is required.
|
Yes. There is no need to clutter your .ssh/config with HostName entries if all you want is name resolution.
Entries in /etc/hosts will be sufficient.
.ssh/config is for specifying login names and other connection options.
You could use wildcards for the hostnames in .ssh/config if you really want to specify connection details there at all.
http://linux.die.net/man/5/ssh_config
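As a minimal sketch of that split (the address, user, and port below are invented), the hosts file supplies the name, and a wildcard pattern in .ssh/config supplies the connection details:

```
# /etc/hosts: one line per alias
192.0.2.10   server-alias

# ~/.ssh/config: matched for every host starting with "server-"
Host server-*
    User myuser
    Port 2222
```

With this, both ssh server-alias and ping server-alias resolve through the single /etc/hosts entry.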
| Single source for both .ssh/config and /etc/hosts |
1,301,653,026,000 |
A company I'm working in has a local site, with documentation etc., where all references point to "srv-moss" as a domain name, which should be defined in the hosts file. I added it there but it doesn't seem to work. Even though I can ping srv-moss just fine, attempts to open it in Firefox or Chromium result in Server DNS said: Server Failure: The name server was unable to process this query.
What should I do? I tried a bunch of solutions that I found for similar problems, but none worked for me.
|
If you are behind a proxy, check that Firefox has it configured as well. I am behind a proxy and yes, I had to configure Firefox (when I used it) to connect to certain IPs (or aliases). Just insert the exception for srv-moss in the proper Network tab.
If your proxy is configured via some kind of system settings, there should be an «exceptions» option there too. If you configured the proxy via the /etc/environment file, add the variables no_proxy="srv-moss" and NO_PROXY="srv-moss" there (to add more than one exception, use a comma as a separator). Don't forget to log out and back in, so that /etc/environment is read again.
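For the /etc/environment route, the result would look something like this (the proxy URL is made up; only the no_proxy lines matter for this problem):

```
http_proxy="http://proxy.example.com:3128/"
https_proxy="http://proxy.example.com:3128/"
no_proxy="srv-moss,localhost,127.0.0.1"
NO_PROXY="srv-moss,localhost,127.0.0.1"
```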
| Browsers doesn't see aliases in /etc/hosts |
1,396,511,032,000 |
How is 127.0.0.1 related to 127.0.0.2?
Using ssh to login to tleilax (OpenSuSE):
tleilax:~ #
tleilax:~ # hostname
tleilax
tleilax:~ #
tleilax:~ # hostname -f
tleilax.bounceme.net
tleilax:~ #
tleilax:~ # cat /etc/hosts
#
# hosts This file describes a number of hostname-to-address
# mappings for the TCP/IP subsystem. It is mostly
# used at boot time, when no name servers are running.
# On small systems, this file can be used instead of a
# "named" name server.
# Syntax:
#
# IP-Address Full-Qualified-Hostname Short-Hostname
#
127.0.0.1 localhost
# special IPv6 addresses
::1 localhost ipv6-localhost ipv6-loopback
fe00::0 ipv6-localnet
ff00::0 ipv6-mcastprefix
ff02::1 ipv6-allnodes
ff02::2 ipv6-allrouters
ff02::3 ipv6-allhosts
127.0.0.2 tleilax.bounceme.net tleilax
tleilax:~ #
tleilax:~ # exit
logout
Connection to 192.168.1.4 closed.
logged into doge (Ubuntu):
thufir@doge:~$
thufir@doge:~$ hostname
doge
thufir@doge:~$
thufir@doge:~$ hostname -f
doge.bounceme.net
thufir@doge:~$
thufir@doge:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 doge.bounceme.net doge
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
thufir@doge:~$
My understanding is that 127.0.0.1 and 127.0.1.1 are, at least in Ubuntu, used for the hostname:
Some years ago Thomas Hood started a discussion[0] about how the system
hostname should be resolved.
The eventual result[1] was that Debian nowadays ships /etc/hosts like
these per default:
127.0.0.1 localhost
127.0.1.1 <host_name>.<domain_name> <host_name>
As also described in the Debian reference[2].
I had a short mail conversation with Thomas and he proposed bringing up
the following at d-d.
https://lists.debian.org/debian-devel/2013/07/msg00809.html
On tleilax I used yast -- why does it give an IP address of 127.0.0.2? Is that any different from 127.0.0.1? Is it just an artifact of using yast?
Finally, will it muck up yast in any way were I to change 127.0.0.2 to 127.0.0.1? I'm guessing it doesn't really matter -- I'm more curious about it.
To what extent is this just convention within each distro, versus a wider requirement for how IPv4 addresses work?
|
Your loopback device lo is bound to the network 127.0.0.0/8 (aka 127.0.0.1/255.0.0.0), thus any address in the range 127.0.0.1 to 127.255.255.254 is your local loopback.
Therefore, it doesn't matter whether you use 127.0.0.1 or 127.0.0.2.
The reason why they (Debian) chose the scheme they chose is explained in the Debian reference (and it's really a workaround for a bug).
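This is easy to verify: binding a socket to 127.0.0.2 succeeds without any extra configuration (a quick sketch, using python3 merely as a convenient socket client):

```shell
python3 - <<'EOF'
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.2", 0))            # any 127.x.y.z address is local
print("bound to", s.getsockname()[0])
s.close()
EOF
```

It prints bound to 127.0.0.2, even though no interface is explicitly configured with that address.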
| why are there different formats for the hosts file between OpenSuSE and Ubuntu? |
1,396,511,032,000 |
I'm being trolled by China, and don't know why I can't block their request to my server.
//host.deny
ALL: item.taobao.com
ALL: 117.25.128.*
But when I watch the error log on my webserver tail -f /var/log/apache2/error.log the requests are still being allowed through.
Question: Why isn't my /etc/hosts.deny config working?
|
The file is called /etc/hosts.deny, not host.deny
Not all services use tcp-wrappers. sshd, for example, doesn't by default. Neither does apache.
You can use iptables to block all packets from 117.25.128/24, e.g.:
iptables -I INPUT -s 117.25.128.0/24 -j DROP
Even better, you can use fail2ban to monitor a log file (such as apache's access.log and/or error.log) and automatically block IP addresses trying to attack your server.
From the debian fail2ban package description:
Fail2ban monitors log files (e.g. /var/log/auth.log,
/var/log/apache/access.log) and temporarily or persistently bans
failure-prone addresses by updating existing firewall rules.
Fail2ban allows easy specification of different actions to be taken such as to ban an IP using iptables or hosts.deny rules, or simply to send a notification email.
By default, it comes with filter expressions for various services
(sshd, apache, qmail, proftpd, sasl etc.) but configuration can be
easily extended for monitoring any other text file. All filters and
actions are given in the config files, thus fail2ban can be adopted
to be used with a variety of files and firewalls.
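A minimal jail for this, in /etc/fail2ban/jail.local, could look like the following (the jail name sshd is the stock one in current fail2ban; on older Debian packages it was ssh, and the numbers are just examples):

```
[sshd]
enabled  = true
maxretry = 3
findtime = 300
bantime  = 3600
```

This bans an address for an hour after three failed logins within five minutes.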
| Editing my /etc/hosts.deny |
1,396,511,032,000 |
I'm very new to bash scripting and Unix, so I will need some help with this. I have 7-10 hosts which I want to ping from one of the servers via cron jobs. What I want is: when a host is up, execute a command on it; when it is down, do nothing.
I don't want logs or any messages.
So far I have this, and unfortunately I don't have the ability to try it right now. If you can just check it and point me in the right direction:
#!/bin/bash
servers=( "1.1.1.1" "2.2.2.2" "3.3.3.3" "4.4.4.4" "5.5.5.5" "6.6.6.6" "7.7.7.7" )
for i in "${servers[@]}"
do
ping -c 1 $i > /dev/null
done
ping -c 1 $i > /dev/null
if [ $? -ne 0 ]; then
if [ $STATUS >= 2 ]; then
echo ""
fi
else
while [ $STATUS <= 1 ];
do
# command should be here where is status 1 ( i.e. Alive )
/usr/bin/snmptrap -v 2c -c public ...
done
fi
I'm not sure if this is right or no. I've used this from one tutorial and there is some things that I'm not sure what they exactly do.
Am I on right way here or I'm totaly wrong?
|
I've made some comments in line to explain what different parts of the script are doing. I've then made a concise version of the script below.
#!/bin/bash
servers=( "1.1.1.1" "2.2.2.2" "3.3.3.3" "4.4.4.4" "5.5.5.5" "6.6.6.6" "7.7.7.7" )
# As is, this bit doesn't do anything. It just pings each server one time
# but doesn't save the output
for i in "${servers[@]}"
do
ping -c 1 $i > /dev/null
# done
# "done" marks the end of the for-loop. You don't want it to end yet so I
# comment it out
# You've already done this above so I'm commenting it out
#ping -c 1 $i > /dev/null
# $? is the exit status of the previous job (in this case, ping). 0 means
# the ping was successful, 1 means not successful.
# so this statement reads "If the exit status ($?) does not equal (-ne) zero
if [ $? -ne 0 ]; then
# I can't make sense of why this is here or what $STATUS is from
# You say when the host is down you want it to do nothing so let's do
# nothing
#if [ $STATUS >= 2 ]; then
# echo ""
#fi
true
else
# I still don't know what $STATUS is
#while [ $STATUS <= 1 ];
#do
# command should be here where is status 1 ( i.e. Alive )
/usr/bin/snmptrap -v 2c -c public ...
#done
fi
# Now we end the for-loop from the top
done
If you need a parameter for each server, create an array of parameters and an index variable in the for-loop. Access the parameter via the index:
#!/bin/bash
servers=( "1.1.1.1" "2.2.2.2" "3.3.3.3" "4.4.4.4" "5.5.5.5" "6.6.6.6" "7.7.7.7" )
params=(PARAM1 PARAM2 PARAM3 PARAM4 PARAM5 PARAM6 PARAM7)
n=0
for i in "${servers[@]}"; do
    ping -c 1 "$i" > /dev/null
    if [ $? -eq 0 ]; then
        /usr/bin/snmptrap -v 2c -c public "${params[$n]}" ...
    fi
    n=$((n + 1)) # increment n by one
done
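As a variation, bash can iterate both arrays by index with ${!array[@]}, which removes the manual counter entirely; check_host below is a hypothetical stub standing in for the real ping test:

```shell
#!/bin/bash
servers=( "1.1.1.1" "2.2.2.2" )
params=( PARAM1 PARAM2 )

check_host() { true; }   # stub for: ping -c 1 "$1" > /dev/null

for n in "${!servers[@]}"; do        # n runs over the indices 0, 1, ...
    if check_host "${servers[n]}"; then
        echo "up: ${servers[n]} -> ${params[n]}"
    fi
done
```

This prints one up: line per reachable server, pairing each address with its parameter.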
| Ping multiple hosts and execute command |
1,396,511,032,000 |
I am trying to capture some IP packets in my personal LAN with tcpdump, through the command
tcpdump -A -i eth0 'tcp and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0) and host 192.168.1.x'
where 192.168.1.x is the target host in the network. But it works only if the target host is my ip, that is the IP of the machine where tcpdump runs. With any other IP in the network it doesn't record anything.
I have an unmanaged switch. The simpler command
tcpdump -A 'host 192.168.1.x'
detects only some igmp packets periodically sent to a broadcasting address, 224.0.0.251, from the host 192.168.1.x.
But how could the IP capture be performed?
Thank you anyway!
|
That's probably because a switch only sends traffic down a port if it believes the destination MAC address is attached to that port.
On a managed switch, you'd set up monitor mode.
On an unmanaged switch, you're left with a couple of options:
ARP spoofing, to trick the rest of the network about which MAC address corresponds to the target IP address. You then re-send the packets with the correct MAC address.
Flooding the switch with enough MAC addresses that it gives up and forwards all packets to every port. May or may not work.
Replacing the switch with either a hub (always forwards all packets to all ports, but also only half-duplex and a bunch of other disadvantages) or a managed switch. A unix box with sufficient ethernet ports can function as a managed switch.
If you care about Internet traffic only, run tcpdump on the gateway.
Install a network tap on the Ethernet cable going to the machine sending or receiving the traffic you want to monitor
The dsniff tools arpspoof and macof implement the first two.
(These may also bring down your network, and require you to reboot various bits of network gear. Since it's your own network, that's your business; doing it on anyone else's network—at least without their permission—would be considered anything from extremely impolite to downright illegal.)
| tcpdump command works only on local ip |
1,396,511,032,000 |
I'd like to have my system automatically take me to https://facebook.com even if I put http://facebook.com into my browser. I can get /etc/hosts to redirect to me to different domains, but it seems to ignore it if I put https:// into it.
This works, and it redirects facebook.com to google.com:
74.125.95.103 facebook.com
This does not
https://74.125.95.103 facebook.com
My guess is that you can't use text at all in the part where the IP is, but how do I force it to use https? Is this possible in iptables?
|
No, it is not possible using iptables.
If you used it to redirect port 80 to port 443, your browser would still speak to it using http rather than https, and all you would get is garbage.
Maybe something using a Squid proxy would work. You could make it a transparent proxy if you can't change everyone's proxy settings.
Or, if it's just for Facebook, there is a new per-user setting to force HTTPS that might work for you when it is rolled out.
Or, if you're using Firefox, check out HTTPS Everywhere.
| Using /etc/hosts or iptables to redirect site to https:// version |
1,396,511,032,000 |
The title pretty much sums it up.
I have some (minimal) content in /etc/hosts.
The file looks something like:
# some comment
127.0.0.1 localhost foo bar
1.2.3.4 baz
::1 localhost ip6-loopback ip6-localhost
I accidentally pressed Super_L+Space today when typing in bash, which immediately presented me something formatted like the output of ls, but listing all the functional content (not the comments) of that file except ip4-addresses, like that:
me@host$
::1 localhost foo bar
baz ip6-loopback ip6-localhost
me@host$ s
That s on the last prompt is not a typo by the way: The next prompt after the hosts listing looks exactly like that.
I can easily reproduce the behavior.
So far, I have looked into the keyboard shortcuts of KDE Plasma and the terminal emulators, but there is no action bound to Super_L+Space.
My .bashrc contains nothing related either.
If I press the combination in any other window but a terminal emulator, nothing happens.
The bash history does not even contain an entry about the event.
What is happening here?
|
It's a Konsole-specific feature introduced in commit 5ba34471
back in 2012.
Reference: this Super User
question
A short summary of the mechanism as originally described in the linked Super User question is as follows:
Super_L+letter in Konsole sends ^X@s<letter>.
The first part is "translated" into "possible-hostname-completions" by readline key bindings.
(bind -p shows "\C-x@": possible-hostname-completions). The remaining part ("s<letter>") will be printed on the next prompt.
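You can ask readline itself which functions involve hostnames; the possible-hostname-completions binding mentioned above should be among them (stderr is silenced because bash -i complains when there is no real terminal):

```shell
bash -i -c 'bind -p' 2>/dev/null | grep hostname
```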
| Why is bash listing the effective content of /etc/hosts if I press Super_L+Space? |
1,396,511,032,000 |
We have a Debian 8.2 system foo provided by our IT department for production. Its /etc/hosts file contains these two lines:
127.0.0.1 localhost
127.0.1.1 foo.example.com foo
This maps the FQDN of the system to 127.0.1.1, while the real IP address of the system is 10.5.1.38 (which is not given in hosts).
Is this correct or should /etc/hosts not contain the FQDN?
Note that the system is networked, has access to DNS and nslookup with the FQDN gives the correct IP address (10.5.1.38).
|
I regard this as a bad practice, I have seen developers doing that. While it can be used in testing environments, I do not recommend using it in production environments.
By definition, the kernel has a very defined behaviour for the localhost.
There could be also problems too, I do remember having a service opened to the Internet that was not working, because the developer used the name in a config file, that was pointing to the loopback instead of the public address.
I my opinion your hosts files should be:
127.0.0.1 localhost
10.5.1.38 foo.example.com foo
| Should /etc/hosts contain an entry for the FQDN that maps to a loopback address? [closed] |
1,396,511,032,000 |
I'm running a CentOS server (7.0) and I'd like to login via sshd as a user, not root. So I set PermitRootLogin no in the config file and su - after login. I've received lots of hacking activities and I decided to allow only one user to login via sshd. Since the username is not my real name or any common name, I think it would be good enough. Let's say it's 'hkbjhsqj'.
I've tried both ways introduced on nixCraft: AllowUsers in sshd_config or pam_listfile.so in PAM. The only problem to me is that anyone else still has the chance to type in passwords, and that leaves records in /var/log/secure. I assume these actions consume my server's resources to run password checking and other stuff.
Let's say I try to login with the username 'admin':
www$ ssh [email protected]
[email protected]'s password:
Permission denied, please try again.
[email protected]'s password:
Permission denied, please try again.
[email protected]'s password:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
and in the secure log:
Aug 8 08:28:40 www sshd[30497]: pam_unix(sshd:auth): check pass; user unknown
Aug 8 08:28:40 www sshd[30497]: pam_listfile(sshd:auth): Refused user admin for service sshd
Aug 8 08:28:43 www sshd[30497]: Failed password for invalid user admin from 192.168.0.1 port 52382 ssh2
Aug 8 08:28:47 www sshd[30497]: pam_unix(sshd:auth): check pass; user unknown
Aug 8 08:28:47 www sshd[30497]: pam_listfile(sshd:auth): Refused user admin for service sshd
Aug 8 08:28:50 www sshd[30497]: Failed password for invalid user admin from 192.168.0.1 port 52382 ssh2
Aug 8 08:28:52 www sshd[30497]: pam_unix(sshd:auth): check pass; user unknown
Aug 8 08:28:52 www sshd[30497]: pam_listfile(sshd:auth): Refused user admin for service sshd
Aug 8 08:28:55 www sshd[30497]: Failed password for invalid user admin from 192.168.0.1 port 52382 ssh2
Aug 8 08:28:55 www sshd[30497]: Connection closed by 192.168.0.1 [preauth]
Aug 8 08:28:55 www sshd[30497]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.0.1
While all this will not happen if I add the IP in /etc/hosts.deny:
www$ ssh [email protected]
ssh_exchange_identification: Connection closed by remote host
and in the secure log:
Aug 8 08:35:11 www sshd[30629]: refused connect from 192.168.0.1 (192.168.0.1)
Aug 8 08:35:30 www sshd[30638]: refused connect from 192.168.0.1 (192.168.0.1)
So my question would be, is there a way I can refuse all irrelevant users' ssh requests from anywhere without password checking, like I put them in the hosts.deny list? But at the same time I do need allow all ssh requests with the username 'hkbjhsqj' from anywhere and check the password then.
|
I don't think it is possible to do what you are asking. If it were, someone could brute-force valid usernames on your server. I am also pretty sure that the username and the password are sent simultaneously by the client; you could verify this by capturing the packets of an SSH connection with Wireshark.
By "hacking activities" I assume you are talking about brute force attempts at passwords. There are many ways to protect yourself from this, I will explain the most common ways.
Disable root login
By denying root login with SSH the attacker has to know or guess a valid username. Most automated brute force attacks only try logging in as root.
Blocking IPs on authentication failure
Daemons like fail2ban and sshguard monitor your log files to detect login failures. You can configure these to block the IP address trying to log in after a number of failed login attempts. In your case this is what I would recommend. This reduces log spam and strain on your server, as all packets from this IP would be blocked before they reach the sshd daemon.
You could, for example, set fail2ban to block IPs with 3 login failures in the last 5 minutes for 60 minutes. You will in the worst case see three failed logins in your log every 60 minutes, assuming the attacker does not give up and move on.
Public key authentication
You can disable password authentication entirely and only allow clients with specific keys. This is often considered to be the most secure solution (assuming the client keeps his key safe and encrypted). To disable password authentication, add your public key to ~/.ssh/authorized_keys on the server and set PasswordAuthentication to no in sshd_config. There are numerous tutorials and tools to assist with this.
| allow only specific users to login via sshd, but refuse connect to non-listed users |
1,396,511,032,000 |
I am setting up a RHEL-based server that is associated with dynamic DNS from DynDNS, with a domain of, say, "abc.dyndns.org" that is dynamically updated with the server's IP address.
I have read that in order to ensure access to your server's services, you need to have at least the following in your /etc/hosts:
127.0.0.1 localhost.localdomain localhost
xxx.xxx.xxx.xxx redhatbox.yourcompany.com redhatbox
Where "xxx.xxx.xxx.xxx" is whatever IP address your server has, and "redhatbox" would be the name of the computer. So here are my questions:
(1) Because my server has an IP that is dynamically assigned by my ISP's DHCP, there is no one IP I can put in place of xxx.xxx.xxx.xxx, what should I do in this case?
(2) Should I simply replace "redhatbox.yourcompany.com" with my DynDNS domain "abc.dyndns.org"? And replace the "redhatbox" alias with "abc"?
If anyone can explain all this for a novice like me that would be great. Thank you very much for your detailed answers and patience.
|
Some context:
When a program asks your machine to resolve a hostname into a IP address it looks into your /etc/hosts and, if not found, it then makes a DNS query.
You don't need to keep a non-loopback IP address in it. You can usually just keep the localhost entries and an alias.
See, that's my /etc/hosts contents:
[braga@coleman ~]$ cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
127.0.0.1 coleman.jazz coleman
::1 localhost6.localdomain6 localhost
coleman.jazz or coleman (named for the musician, Ornette Coleman) is just an alias for my machine.
Direct answers:
Just leave it out.
You can replace it with whatever you want; it's just an alias. You can even replace it with www.google.com (and www.google.com on your machine will then point to your own machine).
| Hosts file on server with dynamic DNS? |
1,396,511,032,000 |
I'm having trouble using the r-commands between my servers to do a tape backup. I've been making changes in the .rhosts file, but I'm not sure if the OS reads that file everytime it's called, or just when it's booted, or some other time.
After I modify the .rhosts file, do I need to do anything to then have it be used with the next r-command?
|
The .rhosts file is read by the daemon (rshd or rlogind or sshd) at each login attempt. (Since ordinary users can edit their .rhosts file whenever they want, it wouldn't make sense to require root to restart the daemon.)
Make sure that the .rhosts file doesn't contain Windows line endings, and that it has proper permissions: it should not be writable by anyone other than you (chmod 644 ~/.rhosts or chmod 600 ~/.rhosts). I don't remember whether the Tru64 implementation performs this check, but it's better to do it right anyway.
Also make sure that the file is a proper text file, with a newline at the end (all unix text files end with a newline, the newline is a line ending character and not a line separator). Make sure not to use Windows line endings, which would add an extra carriage return that unix doesn't treat as a newline.
Make sure that rshd isn't started with the -l flag, which would disable .rhosts processing.
The system logs on the server are where you'll find a clue as to what is happening. Tru64 keeps its logs under /var/adm by default (unless /etc/syslogd.conf has been modified).
| rhosts changes and re-initializing |
1,396,511,032,000 |
I have following line in my /etc/hosts.allow
sshd: 1.2.3.4 : spawn (echo `date` ALLOWED from %a >> /var/log/%d.log) &
The problem is, the date command prints time in the standard format, such as
Thu May 16 15:54:55 CEST 2013
which is complicated to process with my script. I would like to have date to specify my own format, such as date "+%F %T", to get following format:
2013-05-16 16:01:07
even if I escape the special characters (%), the following does not work:
sshd: 1.2.3.4 : spawn (echo `date "+\%F \%T` ALLOWED from %a >> /var/log/%d.log)
Could somebody please advise ?
|
Double the percent sign, and it should work:
sshd: 1.2.3.4 : spawn (echo `date "+%%F %%T"` ALLOWED from %a >> /var/log/%d.log) &
For more information, see the "% Expansion" section of the corresponding man page (hosts_access(5)).
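Outside hosts.allow no doubling is needed, so you can preview the format in a shell first:

```shell
date "+%F %T"        # e.g. 2013-05-16 16:01:07
```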
| date format in /etc/hosts.allow |
1,396,511,032,000 |
Been googling this but found very ambiguous answers and I'm curious if I should modify these values, my VPS hosts file looks like this:
127.0.0.1 localhost
127.0.1.1 debian
144.17.4.xx porter.info porter
...
Everything works fine but I'm curious about the "127.0.1.1 debian" part, should I keep it or rename it?
|
The Debian Installer creates this entry for a system without a permanent IP address, as a workaround for some software (e.g., GNOME).
For a system with a permanent IP address, that permanent IP address should be used here instead of 127.0.1.1.
Here is the complete document.
| What is the purpose of the 127.0.1.1 entry in /etc/hosts? |
1,396,511,032,000 |
Is there some de-facto way for rewriting certain hostnames to other ones? Something like /etc/hosts for host to host instead of ip to host. Is this possible or should I create a local dns cname for that host?
Update regarding the comments
I want to use the local name that resolves to remote domain. The browser is just an example. I'm actually writing an ios app that requests resources from the internet, but I'd like to use local name for simulator-only runs. So to put it in another way I want my app to request http://localalias/, but that system would actually fetch http://remotehost.com/.
|
DNS CNAMEs would be the de-facto way to do this.
Edit: In light of comments below...
I don't think you'll be able to do what you are trying to do. You're trying to trick the browser or some other program into thinking something is an address it's not. The problem is that the program is also going to pass the name of the resource it wants, so that the remote server knows which site to dish up. More than one site could be hosted on a given IP address. The browser sends the site it wants as part of its request, so just re-routing the traffic via a DNS hack is not going to be enough, because the browser would be asking for a resource name that the remote site doesn't know anything about.
You will need to setup a full proxy system on your local system. It needs to either respond with standard browser headers to redirect you to the remote resource, or it needs to fetch the remote resource itself then pass through the data. This could be done with apache, squid, or any number of other proxy and http hosting solutions. If you give more details of your scenario we could be more specific.
| Local DNS rewrite from host to host for web requests |
1,396,511,032,000 |
I'd like to add shorthands for various FQDNs into the /etc/hosts file, such as:
pyrrha.compsci.university.org. pyrrha
Is there a way to force the local DNS resolver to process the /etc/hosts entries recursively?
Why? Inserting full FQDNs into every config file feels redundant. Creating an alias in a central location solves this and also adds a level of abstraction, which allows for a simple update of the FQDN behind the alias, should it ever change.
|
resolv.conf allows you to specify searchdomains. An entry like the following:
search cse.iitb.ac.in it.iitb.ac.in iitb.ac.in
Allows me to:
$ ping -c1 www
PING www.cse.iitb.ac.in (10.105.1.3) 56(84) bytes of data.
64 bytes from cse.iitb.ac.in (10.105.1.3): icmp_seq=1 ttl=64 time=0.803 ms
--- www.cse.iitb.ac.in ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.803/0.803/0.803/0.000 ms
$ ping -c1 cygnus
PING cygnus.it.iitb.ac.in (10.129.1.1) 56(84) bytes of data.
64 bytes from cygnus.it.iitb.ac.in (10.129.1.1): icmp_seq=1 ttl=61 time=0.688 ms
--- cygnus.it.iitb.ac.in ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms
resolv.conf doesn't apply to entries in /etc/hosts, but if your name is being resolved using a DNS server, then searchdomain might be what you're looking for.
| Adding recursive entries into `/etc/hosts` |
1,396,511,032,000 |
Assume I have a VM running on OSX with private IP 10.0.0.1 which can be accessed from the host machine.
I was wondering how I can map a pseudo domain, *.app.dev, to the private IP on my host machine, so that on my host machine I can resolve the domain *.app.dev to 10.0.0.1.
The purpose of this setup is to have virtual environment for development and not pollute my host machine with unnecessary packages and services.
edit: I realize that /etc/hosts can accomplish non wildcard domain names, I should have been more clear and mention wildcard domain names.
|
You can do this with dnsmasq.
Dnsmasq is a very small DNS server usually used as a proxy. It offers a lot of ways to manipulate DNS lookups, one of which is to respond to all DNS queries for a domain with a single IP.
The example dnsmasq.conf file has specific example for this:
# Add domains which you want to force to an IP address here.
# The example below send any host in double-click.net to a local
# web-server.
#address=/double-click.net/127.0.0.1
The following 2 lines are all that you would need to get running
server=8.8.8.8
address=/app.dev/10.0.0.1
(You can change the server parameter to whatever upstream server you want. Or use resolv-file to use a resolv.conf file)
Then just configure your system to use 127.0.0.1 as a DNS server.
| Resolving pseudo domain name to private IP |
1,396,511,032,000 |
I would like all links and address bar entries in the pattern https://*.wikipedia.org/* to be redirected to https://*.0wikipedia.org/*
I tried to achieve this by adding these entries to the /etc/hosts file, but it didn't work:
0wikipedia.org wikipedia.org
en.0wikipedia.org en.wikipedia.org
Any suggestions please?
|
Yes it is, but it's not the best way. I would look into browser extensions for redirecting requests.
Here is how to do it on the network layer:
You need to use a hosts file with one entry per subdomain (or a dns-server with wildcards), like this:
127.0.0.1 en.wikipedia.org
Then you run a webserver on localhost, which redirects all accesses to en.wikipedia.org to en.0wikipedia.org/ while preserving the path. This can be done for example with mod_rewrite for apache.
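Such a rewrite could look roughly like this inside the local vhost (a sketch only; it assumes mod_rewrite is enabled, and the https caveats described below still apply):

```
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(.+)\.wikipedia\.org$ [NC]
RewriteRule ^/(.*)$ https://%1.0wikipedia.org/$1 [R=302,L]
```

The %1 backreference carries the subdomain (e.g. en) over into the target URL, and $1 preserves the path.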
Direct redirection via the hosts file does not work, as the hosts file has nothing to do with redirecting anything; it is just a basic name-resolution mechanism. The solution above merely uses it to fake that en.wikipedia.org is served from your computer, and then redirects the access to your webserver to the target URL.
Further, you need a certificate on the server, as the URLs use https; it will need a security exception in your browser, because it is either self-signed or issued for another domain, since you won't get a certificate for the wikipedia domain from any certificate authority that is in your browser's certificate store.
So while it is possible, it is quite complicated especially because of the https part.
Thanks to @TheFiddlerWins for pointing out the problem with https.
| Is it possible to redirect all wikipedia urls to wikipedia zero via hosts file? |
1,396,511,032,000 |
I was trying for a few hours to make my custom script work with hosts.allow/hosts.deny, to prevent connections to SSH and other services supporting TCP wrappers from unlisted countries.
Example with SSH:
hosts.deny file
sshd : ALL
hosts.allow file
sshd: ALL: spawn /usr/local/bin/country_filter %h
country_filter script:
#!/bin/bash
# Specify the two-letter ISO Country Code(s) to accept
ALLOW_COUNTRIES="RU\|CY" # list of country codes in the exampled format ("RU\|GR\|CY")
COUNTRY=`/usr/bin/geoiplookup $1 | /bin/grep -w $ALLOW_COUNTRIES`
[[ $COUNTRY ]] && RESPONSE="ALLOW" || RESPONSE="DENY"
if [ $RESPONSE = "ALLOW" ]
then
echo "$RESPONSE"
exit 0
else
echo "$RESPONSE"
exit 1
fi
The script above works great from the console, but I could not make it work when using hosts.allow.
What am I missing here?
|
As documented in the hosts_options(5) man page, the standard output is redirected to /dev/null, so that there's no chance for you to get the output from echo. And as you want the exit status to be taken into account, you should use aclexec instead of spawn. Indeed the man page says for aclexec: "The connection will be allowed or refused depending on whether the command returns a true or false exit status."
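Applied to the question's setup, the hosts.allow line becomes something like the following (here %a, which expands to the client address, replaces the question's %h, the client hostname, since geoiplookup wants an address):

```
sshd : ALL : aclexec /usr/local/bin/country_filter %a
```

With aclexec, a zero exit status from the script allows the connection and a non-zero one refuses it, which is exactly what the script's exit 0/exit 1 pair was written for.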
| Reject SSH connections from unlisted countries, using hosts.allow/hosts.deny on CentOS |
1,396,511,032,000 |
tl;dr: accessing 0.0.0.0:port (eg. curl http://0.0.0.0:443) gets redirected(internally) to 127.0.0.1:port (where port is any port number) (eg. the previous curl command is the same as curl http://127.0.0.1:443); why does this happen and how to block connections destined to 0.0.0.0 ?
UPDATE2: I've found a way to block it by patching the Linux kernel (version 6.0.9):
--- .orig/usr/src/linux/net/ipv4/route.c
+++ /usr/src/linux/net/ipv4/route.c
@@ -2740,14 +2740,17 @@ struct rtable *ip_route_output_key_hash_
}
if (!fl4->daddr) {
- fl4->daddr = fl4->saddr;
+ rth = ERR_PTR(-ENETUNREACH);
+ goto out;
+ /* commenting out the rest:
+ fl4->daddr = fl4->saddr; // if you did specify src address and dest is 0.0.0.0 then set dest=src addr
if (!fl4->daddr)
- fl4->daddr = fl4->saddr = htonl(INADDR_LOOPBACK);
+ fl4->daddr = fl4->saddr = htonl(INADDR_LOOPBACK); // if you didn't specify source address and dest address is 0.0.0.0 then make them both 127.0.0.1
dev_out = net->loopback_dev;
fl4->flowi4_oif = LOOPBACK_IFINDEX;
res->type = RTN_LOCAL;
flags |= RTCF_LOCAL;
- goto make_route;
+ goto make_route; END of COMMENTed out block */
}
err = fib_lookup(net, fl4, res, 0);
Result:
Where do packets sent to IP 0.0.0.0 go?:
$ ip route get 0.0.0.0
RTNETLINK answers: Network is unreachable
...they don't!
A client attempts to connect from 127.1.2.18:5000 to 0.0.0.0:80
$ nc -n -s 127.1.2.18 -p 5000 -vvvvvvvv -- 0.0.0.0 80
(UNKNOWN) [0.0.0.0] 80 (http) : Network is unreachable
sent 0, rcvd 0
(if you didn't apply kernel patch, you will need a server like the following for the above client to be able to successfully connect: (as root, in bash)while true; do nc -n -l -p 80 -s 127.1.2.18 -vvvvvvvv -- 127.1.2.18 5000; echo "------------------$(date)";sleep 1; done)
Patched ping (i.e. a ping that doesn't set the destination address to be the same as the source address when the destination address is 0.0.0.0; i.e. comment out the 2 lines under // special case for 0 dst address that you see here):
$ ping -c1 0.0.0.0
ping: connect: Network is unreachable
instant.
However, if specifying source address, it takes a timeout(of 10 sec) until it finishes:
$ ping -I 127.1.2.3 -c1 -- 0.0.0.0
PING 0.0.0.0 (0.0.0.0) from 127.1.2.3 : 56(84) bytes of data.
--- 0.0.0.0 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
UPDATE1:
The why part is explained here, but I was expecting a bit more detail as to why this happens. For example (thanks to the user with nickname anyone on the Libera.Chat #kernel channel):
$ ip route get 0.0.0.0
local 0.0.0.0 dev lo src 127.0.0.1 uid 1000
cache <local>
This shows that packets destined for 0.0.0.0 somehow get routed to the localhost interface lo and get source IP 127.0.0.1 (if I'm interpreting this right), and because that route doesn't appear in this list:
$ ip route list table local
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
local 169.254.6.5 dev em1 proto kernel scope host src 169.254.6.5
broadcast 169.254.6.255 dev em1 proto kernel scope link src 169.254.6.5
local 192.168.0.17 dev em1 proto kernel scope host src 192.168.0.17
broadcast 192.168.255.255 dev em1 proto kernel scope link src 192.168.0.17
it means that it must be internal to the Linux kernel, i.e. hardcoded.
To give you an idea, here's how it looks for an IP that's on the internet (I used quad1 as an example IP):
$ ip route get 1.1.1.1
1.1.1.1 via 192.168.1.1 dev em1 src 192.168.0.17 uid 1000
cache
where 192.168.1.1 is my gateway, ie.:
$ ip route
default via 192.168.1.1 dev em1 metric 2
169.254.6.0/24 dev em1 proto kernel scope link src 169.254.6.5
192.168.0.0/16 dev em1 proto kernel scope link src 192.168.0.17
Because iptables cannot be used to sense (and thus block/drop) such connections destined to 0.0.0.0 that get somehow routed to 127.0.0.1, it might prove difficult to find a way to block them... but I'll definitely try to find a way, unless someone already knows one.
@Stephen Kitt (in the comments) suggested a way to block hostnames that reside in /etc/hosts, so instead of:
0.0.0.0 someblockedhostname
you can have
127.1.2.3 someblockedhostname
127.1.2.3 someOTHERblockedhostname
(anything other than 127.0.0.1, but you can use the same IP for every blocked hostname, unless you want to differentiate)
which IP you can then block using iptables.
However if your DNS resolver (ie. NextDNS, or 1.1.1.3) returns 0.0.0.0 for blocked hostnames (instead of NXDOMAIN) then you cannot do this (unless, of course, you want to add each host manually in /etc/hosts, because /etc/hosts takes precedence - assuming you didn't change the line hosts: files dns from /etc/nsswitch.conf)
OLD: (though edited)
On Linux (I tried Gentoo and Pop OS!, latest) if you have this line in /etc/hosts:
0.0.0.0 somehosthere
and you run this as root (to emulate a localhost server listening on port 443)
# nc -l -p 443 -s 127.0.0.1
then you go into your browser (Firefox and Chrome/Chromium tested) and put this in address bar:
https://somehosthere
or
0.0.0.0:443
or
https://0.0.0.0
then the terminal where you started nc(aka netcat) shows a connection attempt (some garbage text including the plaintext somehosthere if you used it in the url)
or instead of the browser, you can try:
curl https://somehosthere
or if you want to see the plaintext request:
curl http://somehosthere:443
This doesn't seem to be mitigable, even when using dnsmasq, as long as that 0.0.0.0 somehosthere line is in /etc/hosts. However, if you use dnsmasq and your DNS resolver (e.g. NextDNS or Cloudflare's 1.1.1.3) returns 0.0.0.0 instead of NXDOMAIN (true at the time of this writing), and that hostname isn't in your /etc/hosts (and in whatever file you told dnsmasq to use as its hosts file), then there are two ways to mitigate it (either or both will work):
use dnsmasq arg --stop-dns-rebind
--stop-dns-rebind
Reject (and log) addresses from upstream nameservers which are in
the private ranges. This blocks an attack where a browser behind
a firewall is used to probe machines on the local network. For
IPv6, the private range covers the IPv4-mapped addresses in pri‐
vate space plus all link-local (LL) and site-local (ULA) ad‐
dresses.
use line bogus-nxdomain=0.0.0.0 in /etc/dnsmasq.conf which makes dnsmasq itself return NXDOMAIN for any hostname that resolved to 0.0.0.0 (except, once again, if that hostname was in /etc/hosts (bypasses dnsmasq) and what you told dnsmasq to use as /etc/hosts (if you did))
So, the second part of this question is how to disallow accesses to 0.0.0.0 from being redirected to 127.0.0.1 ? I want this because when using NextDNS (or cloudflare's 1.1.1.3) as DNS resolver, it returns 0.0.0.0 for blocked hostnames, instead of NXDOMAIN, thus when loading webpages, parts of them(that are located on blocked hostnames) will try to access my localhost server running on port 443 (if any) and load pages from it instead of just being blocked.
Relevant browser-specific public issues being aware of this(that 0.0.0.0 maps to 127.0.0.1):
Chrome/Chromium: https://bugs.chromium.org/p/chromium/issues/detail?id=1300021
Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=1672528#c17
|
Why this happens is explained in Connecting to IP 0.0.0.0 succeeds. How? Why? — in short, packets with no destination address (0.0.0.0) have their source address copied into their destination address, and packets with no source or destination have their source and destination addresses set to the loopback address (INADDR_LOOPBACK, 127.0.0.1); the resulting packet is sent out on the loopback interface.
As you determined, this behaviour is hard-coded in the Linux kernel’s IPv4 networking stack, and the only way to change it is to patch the kernel:
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 795cbe1de912..df15a685f04c 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -2740,14 +2740,8 @@ struct rtable *ip_route_output_key_hash_rcu(struct net *net, struct flowi4 *fl4,
}
if (!fl4->daddr) {
- fl4->daddr = fl4->saddr;
- if (!fl4->daddr)
- fl4->daddr = fl4->saddr = htonl(INADDR_LOOPBACK);
- dev_out = net->loopback_dev;
- fl4->flowi4_oif = LOOPBACK_IFINDEX;
- res->type = RTN_LOCAL;
- flags |= RTCF_LOCAL;
- goto make_route;
+ rth = ERR_PTR(-ENETUNREACH);
+ goto out;
}
err = fib_lookup(net, fl4, res, 0);
This patch shows the original implementation, explaining the “why?” part above: if the packet has no destination address (i.e. it’s 0.0.0.0):
the source address is copied to the destination address
if the packet still has no destination address, i.e. it also has no source address, both addresses are set to the loopback address (127.0.0.1);
in all cases, the outgoing device is set to the loopback device, and the route is constructed accordingly.
The patch changes this behaviour to return a “network unreachable” error instead.
| Why accessing 0.0.0.0:443 gets redirected to 127.0.0.1:443 on Linux and how to disallow it? |
1,396,511,032,000 |
To clean out my custom hosts file and remove dead domains on Windows, I ping the hosts listed in domain.txt, and if I get a reply I add that host to result.txt:
@echo off
>result.txt (
for /f %%i in (domain.txt) do ping -n 1 %%i >nul && echo 127.0.0.1 %%i
)
Can anyone help me to implement the same functionality via a Linux shell script?
|
Funceble (https://github.com/funilrys/funceble) is the best tool for this that I can recommend today.
$ ./funceble -f domain.txt
Domain Status Expiration Date Source Analyse Date
------------------------------------------- ----------- ----------------- ---------- --------------------
google.com ACTIVE 14-sep-2020 WHOIS Tue Sep 25, 06:30:43
stackexchange.com ACTIVE 12-jun-2018 WHOIS Tue Sep 25, 06:30:44
zhnixomknxkm.com INACTIVE Unknown NSLOOKUP Tue Sep 25, 06:30:45
kvfjjyrlsphr.info INACTIVE Unknown NSLOOKUP Tue Sep 25, 06:30:46
adblockplus.org ACTIVE 09-jun-2018 WHOIS Tue Sep 25, 06:30:47
Status Percentage Numbers
----------- ------------ -------------
ACTIVE 60% 3
INACTIVE 40% 2
INVALID 0% 0
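If you'd rather not install anything, a direct shell translation of the original batch script might look like this (a minimal sketch; it writes its own demo domain.txt here, the ping flags are GNU iputils, and -W 2 caps the wait at two seconds):

```sh
#!/bin/sh
# Demo input: one name that should resolve locally, one that should not.
printf '%s\n' localhost no-such-host.invalid > domain.txt

# For each hostname, send a single ping; on success, append a
# hosts-file style entry to result.txt (mirrors the batch script).
: > result.txt
while IFS= read -r host; do
    if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
        printf '127.0.0.1 %s\n' "$host" >> result.txt
    fi
done < domain.txt
cat result.txt
```

Unlike Funceble, this only tells you whether a host answers ICMP right now, not whether the domain is registered.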
| Ping a host, check if alive or not, and send result to file via a shell script |
1,396,511,032,000 |
What if one host is defined in /etc/hosts like:
192.168.0.100 server
and one is defined in ~/.ssh/config like:
Host server
HostName 192.168.0.101
and you ssh into server: ssh server.
How would such a conflict be resolved? I guess one has higher priority than the other.
|
If you do ssh server, the server part could be a real host name or an ssh-internal "nickname". ssh first looks for a nickname in ~/.ssh/config; if it finds a configuration there, it will use it. If it does not find one, it assumes a real hostname and tries to resolve it via /etc/hosts and DNS.
| What will be prioritized in a conflict between ~/.ssh/config hostname and /etc/hosts? |
1,396,511,032,000 |
I have a text file, modifyhostslist.txt, which contains entries that correspond to entries found in my hosts file. Not every entry in my hosts file needs to be modified, only entries also found in modifyhostslist.txt.
The entries found in modifyhostslist.txt are to be commented out in the hosts file.
Sample line (entry) found in modifyhostslist.txt: 127.0.0.1 www.domain.com
The following serves as the comment out sequence: #%%#
I've attempted to use sed to complete the task, but so far I've been unsuccessful. Here's my most recent stab at it:
while read line; do
sed -i 's/'"$line"'/#%%#'"$line"'/' /system/etc/hosts;
done < modifyhostslist.txt
In addition, the #%%# comments will be removed at specific intervals, thereby returning the hosts file to its original condition. I suspect that simply rearranging the command used to insert the comments can also be used to remove them?
It seems the awk command might work, but I'm unsure of how to use it as well.
|
You used the command:
while read line; do
sed -i 's/'"$line"'/#%%#'"$line"'/' /system/etc/hosts;
done < modifyhostslist.txt
As long as the lines in modifyhostslist.txt match the lines in /system/etc/hosts, that command really should work.
If the lines look identical to the eye but the command still does not work, the cause might be a mismatch between the (invisible) line-endings. DOS/Windows files have two-character line-endings while Unix and Mac use one-character line-endings. If this is the problem, the solution is to remove the offending characters. Since hosts is a Unix system file, I expect that it has the correct line-endings and we thus need to remove the surplus \r characters from the modifyhostslist.txt file. This can be done as follows:
while read line; do
    line=$(echo "$line" | tr -d '\r')
    sed -i 's/'"$line"'/#%%#'"$line"'/' /system/etc/hosts;
done < modifyhostslist.txt
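For the second part of the question (removing the #%%# markers later), the list file isn't needed at all: stripping the leading marker reverts every commented line. A sketch on a scratch file (substitute /system/etc/hosts on the device):

```sh
# Build a scratch file with one commented-out entry and one untouched entry.
printf '%s\n' '#%%#127.0.0.1 www.domain.com' '127.0.0.1 keep.example.com' > hosts.demo

# Remove the comment-out sequence from the start of any line that has it.
sed -i 's/^#%%#//' hosts.demo
cat hosts.demo
```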
| Sed to Modfiy Hosts File |
1,396,511,032,000 |
I know I can associate hostname with my IP address in /etc/hosts:
1.2.3.4 foo
and then, for example in tcpdump output, I will see foo instead of my IP address (if -n was not used)
Anyways, can I temporarily add such IP -> hostname mapping on the commandline, without actually editing the file?
Let's suppose I connect to a wifi and get some random IP. I want this IP to be resolved into my hostname in tcpdump for the current session (without adding an entry permanently into /etc/hosts).
UPDATE:
In case it was not clear from my question, and the title, I am looking for a solution how to do this without modifying /etc/hosts.
In the same way as I can use the hostname command to set the hostname for the current session (i.e. until the next restart) without having to edit /etc/hostname, I am looking to set up reverse lookup for the current session (i.e. until the next restart) without having to modify /etc/hosts.
|
The hostname resolution service of the C library (which is used by almost all software that needs hostname resolution) is controlled primarily by the hosts: line in the /etc/nsswitch.conf file. Each keyword on that line causes the corresponding libnss_*.so library to be loaded, and those libraries will ultimately handle the hostname resolution requests from the applications.
If your distribution includes a package called nss-myhostname (or any other package that will provide a libnss_myhostname.so.* library), then you could install that package, add myhostname to the hosts: line of nsswitch.conf, and then that library will automatically associate the locally configured system hostname with any and all IP addresses configured to network interfaces on your system. With this configuration, you would not need to run any commands to update the association: it will all happen automatically. It will also associate your local hostname with IP address 127.0.0.2 (and IPv6 address ::1) if you have no IP addresses configured on your system at all.
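For that nss-myhostname case, the resulting hosts: line would look something like this (a sketch; keyword order is lookup order, so files still wins for explicit entries):

```
hosts: files myhostname dns
```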
If the hosts: line of your nsswitch.conf includes the keyword resolve or your /etc/resolv.conf has a line nameserver 127.0.0.53, then you are using systemd-resolved as your DNS resolver. It can provide similar automatic association for your local hostname to any locally configured IP addresses as libnss_myhostname (see above). If this is your case, see man systemd-resolved and read the chapter titled SYNTHETIC RECORDS. It will also include optional mDNS (see below) and LLMNR (link-local multicast name resolver/responder) functionality, which might also provide local hostname resolution in a roundabout way.
If the hosts: line of your nsswitch.conf includes a hostname resolution service that can use multicast-DNS (mDNS) like mdns4_minimal and your system includes a mDNS responder (e.g. avahi-daemon), it might enable automatic resolving of <hostname>.local to the local system's IP addresses, and vice versa.
If the hosts: line of your nsswitch.conf includes any other options (or your distribution offers other libnss_*.so libraries), you will have to investigate the functionalities offered by them on your own, since you did not specify your distribution.
If the hosts: line of your nsswitch.conf includes only the classic files and dns keywords, or you need to assign names to IP addresses that are not currently configured to any of the local network interfaces (e.g. names for IP addresses of other hosts), then see the methods in Marcus Müller's answer.
If none of these answers are suitable to you, then I'm afraid the answer will be "No, there is no functionality like you're asking for. But if you have programming skills, there would be nothing stopping you from implementing it yourself" - and the list of possible methods above and in Marcus Müller's answer should give you ideas on the interfaces you could use to plug your own solution into.
| associate IP with hostname without editing /etc/hosts |
1,396,511,032,000 |
I puchased a domain, say fireworks.com, and I would like to call my server ubuntu-18-04. How am I expected to edit /etc/hosts?
Is it possible to add multiple aliases as follows?
127.0.0.1 localhost
127.0.1.1 ubuntu-18-04.fireworks.com fireworks.com ubuntu-18-04
5.247.221.66 ubuntu-18-04.fireworks.com fireworks.com ubuntu-18-04
Usually in documentation the /etc/hosts format has only three fields:
1. An IP address
2. A fqdn
3. The hostname
Is it possible (and is it necessary?) to add a fourth field, as in my previous example, including fireworks.com? I would like to receive mail as [email protected] rather than as [email protected]
|
From man hosts:
This manual page describes the format of the /etc/hosts file. This
file is a simple text file that associates IP addresses with
hostnames, one line per IP address. For each host a single line
should be present with the following information:
IP_address canonical_hostname [aliases...]
Yes, you can add multiple lines of IPs and aliases. But for your mail reception, I'd suggest you use DNS for domain name mapping. Use dnsmasq to make it easier; it resolves from /etc/hosts too.
| /etc/hosts and aliases |
1,396,511,032,000 |
I am referencing the following question because it's similar but not the same:
hostname -i returns strange result in linux
On my CentOS 7 system, I get a strange IP address from "hostname -i" after I change my hostname, and I am trying to figure out why this is the case.
I change the hostname with following command:
# hostnamectl set-hostname saturn
# systemctl restart systemd-hostnamed
My /etc/hosts file shows:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
The following is in my /etc/nsswitch.conf file:
hosts: files dns myhostname
My server IP address is 192.168.1.13, but hostname -i returns a strange IP address:
# hostname -i
92.242.140.21
However, "hostname -I" is fine:
# hostname -I
192.168.1.13
Why does hostname -i return 92.242.140.21? Is it a random dynamic IP assigned to my system by DNS? Can someone explain? Thanks!
|
Unlike the hostname -I command, which just lists all configured IP addresses on all network interfaces, the hostname -i command performs name resolution (see the hostname man page).
Since your newly assigned hostname cannot be resolved using the /etc/hosts file, running hostname -i will cause your system's name resolver to generate a DNS query to an external DNS server. At this server (which I presume belongs to your ISP), this query comes up empty (NXDOMAIN result: i.e. non-existent domain). Because your ISP has partnered up with Barefruit, rather than receiving the NXDOMAIN result, you receive a Barefruit IP address in response to your query:
$ dig +short -x 92.242.140.21
unallocated.barefruit.co.uk.
I imagine that adding your new hostname to your /etc/hosts file will make the weird Barefruit IP address disappear when you run the hostname -i command. If not, you may disregard this answer : )
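Concretely, that addition would be a single /etc/hosts line using the hostname and address from the question:

```
192.168.1.13 saturn
```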
Just for the fun of it: using the dig command, you can interrogate different name servers. To see the difference in response, you could run the following two commands:
$ dig saturn
$ dig @8.8.8.8 saturn
The first causes name resolution via your system's preconfigured DNS server, and likely results in a Barefruit IP address being returned. The second command asks Google Public DNS to resolve the name, and returns with an NXDOMAIN status. Or not?
If so, your ISP may be involved in the dubious practice of DNS hijacking, and you may want to figure out if there is an opt-out possibility, or change your DNS service provider.
| Understanding why hostname -i returns strange IP address |
1,556,108,912,000 |
I need to know if there is a web browser that uses its own "hosts" file or simply ignores the OS's hosts file.
UPDATE 1: This is the situation:
a certain program requires a certain URL for validation: not wanted
I need that URL for getting help: wanted
I tested with Firefox, Chromium, Vivaldi, Midori, and all of them read hosts file.
|
I do not know of a browser-based utility to do this. But you can do it for the entire OS. The key file is /etc/nsswitch.conf. To use only DNS records, you should have a line inside like:
hosts: dns
If you need /etc/hosts to be consulted, and to have the file take precedence over DNS, you need a line like:
hosts: files dns
EDIT: You can set Firefox to use DNS over https. Here you can find detailed instructions:
1. Type about:config in the Firefox address bar and then press Enter. When Firefox asks, click on the button stating that you accept the risks.
2. In the search field, enter network.trr to display all of the settings for Firefox's Trusted Recursive Resolver, which is the DNS-over-HTTPS endpoint used by Firefox.
3. Double-click on network.trr.mode, enter 2 in the field, and press OK. This turns on DoH in Firefox.
4. Make sure network.trr.uri is set to https://mozilla.cloudflare-dns.com/dns-query, as this is Cloudflare's DoH DNS resolver that Firefox has partnered with for the test. If it is not set to this URL, double-click on the setting and enter the URL.
5. You can now close the about:config page.
| How to ignore hosts file? [closed] |
1,556,108,912,000 |
I am trying to use /etc/hosts on my mac to block infamous scumbag sites like mackeeper.com and com-cleaner.systems from ever loading again in popups.
While doing that I've found these entries on my hosts file:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
so I have added these...
127.0.0.1 mackeeper.com
127.0.0.1 www.mackeeper.com
127.0.0.1 mackeeperapp.zeobit.com
127.0.0.1 mackeeperapp2.mackeeper.com
127.0.0.1 *.mackeeper.com
127.0.0.1 activate.adobe.com
127.0.0.1 practivate.adobe.com
127.0.0.1 *.com-cleaner.systems
127.0.0.1 *.bet.pt
and all these sites continue to load fine even after a restart.
I have also tried adding the same lines with fe80::1%lo0 and ::1, without success.
Any ideas?
|
macOS has a DNS cache, and if the IP addresses of the problem sites are already in your DNS cache, editing /etc/hosts won't have an immediate effect.
The procedure for flushing the DNS cache is annoyingly version-dependent:
https://help.dreamhost.com/hc/en-us/articles/214981288-Flushing-your-DNS-cache-in-Mac-OS-X-and-Linux
10.4: lookupd -flushcache
10.5, 10.6: dscacheutil -flushcache
10.7, 10.8: sudo killall -HUP mDNSResponder
10.9: dscacheutil -flushcache; sudo killall -HUP mDNSResponder
10.10.1 .. 10.10.3: sudo discoveryutil udnsflushcaches
10.10.4+: sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
11: sudo killall -HUP mDNSResponder
12+: sudo killall -HUP mDNSResponder; sudo killall mDNSResponderHelper; sudo dscacheutil -flushcache
| /etc/hosts not blocking anything |
1,556,108,912,000 |
I'm having an issue relating to host names and SSL signing. The certificate signing process works fine if my host name is puppet, with the IP of the puppet master server set in /etc/hosts.
I don't want to use the IP as it will likely change and I'll have to update /etc/hosts again.
Instead I point directly to the url but this causes additional issues relating to inconsistency in certificate names.
Is there anyway to set a host to url? E.g. something like the following in /etc/hosts:
example.com puppet
|
Is there anyway to set a host to url? E.g. something like the following in /etc/hosts
That is not a URL. It's a hostname. The point of the /etc/hosts file is that it provides an alternative to DNS for resolving hostnames to addresses.
The files nameservice (the bit of code that sits between your application and the /etc/hosts file) does not support this.
If you have your own nameserver then you could set up a CNAME record pointing the name puppet to example.com.
Alternatively you could write a script to capture the DNS address of example.com and append it to a template to create the hosts file (at boot or at intervals).
Or just fix your certificates.
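For the nameserver option, the CNAME record would look something like this in a BIND-style zone file (a sketch; the trailing dot keeps the target absolute):

```
puppet  IN  CNAME  example.com.
```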
| Set a host name alias in /etc/hosts? |
1,556,108,912,000 |
From my host with hostname localhost, I would like to reach an external host with hostname exthost through the internet. The public IP address of exthost is dynamically assigned by the Internet Provider.
I would like to refer to exthost using always its name, and mapping this name to its actual IP address, according to the value assigned by the Provider, which may change.
To do so, I would like to use /etc/hosts, properly updating the IP address value of the line related to exthost.
For example, the line in /etc/hosts in Linux may look like:
<ip_address> exthost.domain exthost
I am using Ubuntu 18.04, but this surely applies also to other distros/Unix-like systems.
Is it possible to do this, avoiding to set up a DNS (and also a dynamic DNS, which sometimes is not a free service) just for this purpose?
For example, is it possible to use, instead of an explicit <ip_address>, a reference to another file containing only the desired IP as a string? So that this file can be accessed and modified by a user, according to the IP value, which may vary.
Note: this question seems not to be the same case, because it is about the local machine. I am instead referring to an external host.
|
There is no such thing as a user-defined hosts file on Linux, but you can use HOSTALIASES, which works with canonical names.
If I got your question correctly you can use a dynamic DNS service like DynDNS or No-IP to always have a correct public IP address your ISP assigned.
You can then use HOSTSALIASES to map exthost to FQDN name provided by a dynamic DNS service.
Export the HOSTALIASES value with export HOSTALIASES=~/.hosts and then add the following line to the ~/.hosts file to map exthost to the FQDN provided by No-IP, for example.
exthost yourname.no-ip.org
HOSTALIASES works only with canonical names, not IPs, which is why you should use a dynamic DNS service to have an FQDN; but you can also just use that FQDN directly and skip HOSTALIASES completely.
I don't think you can do this without setting up some DNS service, or scripting something on the remote site to always send you its public IP and then changing that IP in the local /etc/hosts file.
Another option would be to either give the user permission to change /etc/hosts, or to set that user up in a chroot environment and give him his own /etc/hosts file there.
| Use reference to file instead of IP in /etc/hosts |
1,556,108,912,000 |
I've added a few hostnames in /etc/hosts to resolve to my LXD container:
$ less /etc/hosts
127.0.0.1 localhost
127.0.1.1 HOST
lemh 10.0.3.219
pma.lemh 10.0.3.219
wp.lemh 10.0.3.219
But ping, getent ahosts and Firefox cannot resolve them. I don't want to restart right now.
I've tried systemctl restart networking.service to no avail. Is there a way to resolve them without restarting my system?
|
In /etc/hosts you have to write:
ip alias
so the correct form is:
127.0.0.1 localhost
127.0.1.1 HOST
10.0.3.219 lemh pma.lemh wp.lemh
| Custom hostnames on /etc/hosts not resolved |
1,556,108,912,000 |
I need to use a custom URL name which is accessible from all devices in a LAN.
I know that it can be set in the /etc/hosts file
127.0.0.1 myname
127.0.1.1 system09-System-Product-Name
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
With the myname entry in the hosts file, I can access myname/urlpath, but only from my own system. How can I make this available to other systems in my LAN too?
|
Localhost, as its name says, can only be accessed from your local system.
If you need other users to access your custom URL, you need to map your system's IP address to the name used, and then add this entry on all your LAN workstations by editing their /etc/hosts files, for example:
127.0.0.1 <custom_name> # this is for localhost
<your_ip_address> <custom_name> # this is to be added to other workstations in the LAN
Another solution is to use a DNS server in your local LAN and create an A record for your custom name that will allow other users in your LAN to access your link.
| How to change the name of localhost to a custom name which is available to other users in LAN |
1,556,108,912,000 |
I am using /etc/hosts to map localhost to a web domain. I would like a fast way of doing this rather than searching and replacing every time. I put this in my .bashrc file.
alias hostchange='
nowdir=$PWD;
cd /etc;
mv hosts hoststempname;
mv hostssecondary hosts;
mv hoststempname hostssecondary;
cd $nowdir'
I am getting this error, and it won't progress regardless of whether I answer y or n.
mv: rename hosts to hoststempname: Permission denied
override rw-r--r-- root/wheel for hosts? (y/n [n])
I got it to work by adding sudo.
alias hostchange='
nowdir=$PWD;
cd /etc;
sudo mv hosts hoststempname;
sudo mv hostssecondary hosts;
sudo mv hoststempname hostssecondary;
cd $nowdir'
Is this legitimate? I'm taking a shot in the dark here.
|
You are on the right track! A couple of comments about this. It is usually better practice to leave multi-line actions like this to functions. I'd probably write it like this:
change_etc_hosts_file() {
set -e # stop running if we encounter an error
sudo \mv -f /etc/hosts /etc/hoststempname
sudo \mv -f /etc/hostssecondary /etc/hosts
sudo \mv -f /etc/hoststempname /etc/hostssecondary
set +e
}
alias changehosts=change_etc_hosts_file
You'll notice I also used absolute paths instead of changing directory. This is usually a better idea (to use absolute paths). If you do want to keep using relatives then it is usually better to do that in a sub-shell so you don't have to handling changing directory back to $PWD (which if you abort will leave you in a weird state). To do it as a sub-shell it would look like this:
change_etc_hosts_file() {
( # use subshell
cd /etc
set -e # stop running if we encounter an error
sudo \rm -f hoststempname # the \ escapes aliases which might cause prompting
sudo \cp -f hosts hoststempname
sudo \cp -f hostssecondary hosts
sudo \cp -f hoststempname hostssecondary
)
}
The cd happens inside the () which is a new process so it won't affect your current working directory.
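You can exercise the three-step swap safely on scratch files first (hypothetical file names) to convince yourself the contents really trade places:

```sh
# Two scratch files standing in for /etc/hosts and /etc/hostssecondary.
printf 'A\n' > hosts.demo
printf 'B\n' > hostssecondary.demo

# Same three-step swap as in the function, without sudo or /etc.
mv hosts.demo hoststempname.demo
mv hostssecondary.demo hosts.demo
mv hoststempname.demo hostssecondary.demo

cat hosts.demo hostssecondary.demo   # B then A: the contents swapped
```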
| Simple command that switches between two host files |
1,556,108,912,000 |
I'm a shell beginner, and here's an example that I don't know how to implement.
Any help is appreciated; thanks in advance!
Step 1: Get the domain resolution A record via dig.
dig @8.8.8.8 liveproduseast.akamaized.net +short | tail -n1
Step 2: Combine the obtained IP address and domain name into a line that looks like this.
23.1.236.106 liveproduseast.akamaized.net
Step 3: Add it to the last line of the /etc/hosts file.
127.0.0.1 localhost loopback
::1 localhost
23.1.236.106 liveproduseast.akamaized.net
Step 4: Set it up as an automated task that runs every 6 hours. When the resolved IP has changed, update the /etc/hosts file (replacing the previously added IP).
crontab -e
0 */6 * * * /root/test.sh > /dev/null 2>&1
|
One way to do that is basically replacing the old ip with the new one:
$ cat /root/test.sh
#!/bin/sh
current_ip=$(awk '/liveproduseast.akamaized.net/ {print $1}' /etc/hosts)
new_ip=$(dig @8.8.8.8 liveproduseast.akamaized.net +short | tail -n1 | grep '^[.0-9]*$')
[ -z "$new_ip" ] && exit
if sed "s/$current_ip/$new_ip/" /etc/hosts > /tmp/etchosts; then
cat /tmp/etchosts > /etc/hosts
rm /tmp/etchosts
fi
On the sed part, if you're using GNU you can simply do:
sed -i "s/$current_ip/$new_ip/" /etc/hosts
Or if you have moreutils installed
sed "s/$current_ip/$new_ip/" /etc/hosts | sponge /etc/hosts
Explanation
grep '^[.0-9]*$' catches IP address, if there isn't one, then it outputs nothing.
awk '/liveproduseast.akamaized.net/ {print $1}' /etc/hosts
Find a line that matches "liveproduseast.akamaized.net", then print its first column, which is the IP.
sed "s/what to replace/replacement/" file
Replace the first occurrence of what you want to replace with the replacement value
And for notice, you cannot do:
sed "s/what to replace/replacement/" file > file
More details: https://stackoverflow.com/questions/6696842/how-can-i-use-a-file-in-a-command-and-redirect-output-to-the-same-file-without-t
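One gap in the approach above is the very first run, when the hostname isn't in /etc/hosts yet and there is no old IP to replace. A hedged sketch handling both cases (the file is a parameter so the logic can be tried on a copy before pointing it at /etc/hosts):

```shell
# update_hosts_entry FILE NAME IP
# Replaces the IP on an existing line ending in NAME, or appends a new line.
update_hosts_entry() {
    file=$1 name=$2 ip=$3
    if [ -z "$ip" ]; then
        return 0    # nothing resolved; leave the file alone
    fi
    if grep -q "[[:space:]]$name\$" "$file"; then
        # Replace the address on the existing line, keeping the original inode
        sed "s/^[^[:space:]]*\([[:space:]]*$name\)\$/$ip\1/" "$file" > "$file.tmp" \
            && cat "$file.tmp" > "$file" && rm -f "$file.tmp"
    else
        printf '%s %s\n' "$ip" "$name" >> "$file"
    fi
}

# Real use (as root), reusing the dig pipeline from the answer:
# update_hosts_entry /etc/hosts liveproduseast.akamaized.net \
#     "$(dig @8.8.8.8 liveproduseast.akamaized.net +short | tail -n1 | grep '^[.0-9]*$')"
```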
| How do I write dig output to /etc/hosts file? |
1,556,108,912,000 |
Ippsec does a lot of hackthebox boxes walkthroughs and in many of them he edits the /etc/hosts file.
Sometimes he adds multiple hostnames to the same IP address, and when he browses those hostnames he gets different webpages. Shouldn't all hostnames return the same webpage? Because my understanding (which may be very wrong) is that the /etc/hosts file only links IP addresses to hostnames so there is no need to do a DNS lookup. Can anybody help me?
Thanks!
|
Web servers see the Host header (i.e. the website name) that the browser is attempting to contact. The Host header is sent regardless of how the IP was resolved. A single web server can host multiple sites on a single IP, and thus uses the Host header to determine which site/content to respond with.
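As a sketch of how a server does this, here is a minimal nginx configuration (server names and paths are hypothetical) serving two different sites on the same IP and port, keyed purely on the Host header:

```nginx
server {
    listen 80;
    server_name site-a.example;
    root /var/www/site-a;   # served when the browser sends "Host: site-a.example"
}
server {
    listen 80;
    server_name site-b.example;
    root /var/www/site-b;   # same IP, different Host header, different content
}
```

Apache's name-based virtual hosts work the same way.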
| Why adding hostnames to /etc/hosts entries change the website viewed |
1,556,108,912,000 |
My question is similar but opposite to Telnetting the Local port not working but trying the ip working
For me, telnet to the local port works but trying with IP does not work :(
I am running pgbouncer on port 6432:
$ telnet 192.x.x.x 6432
Trying 192.x.x.x...
telnet: Unable to connect to remote host: Connection refused
I set listen_addr = *, but still using telnet with IP from another server is not working.
See http://lists.pgfoundry.org/pipermail/pgbouncer-general/2013-January/001097.html for the same scenario (but no useful answer).
The output of netstat -plnt is
tcp 0 0 127.0.0.1:6432 0.0.0.0:* LISTEN 19879/./pgbouncer
How can I fix this?
|
A quick Google shows that recommended safe configurations for pgbouncer often set up the listening port only on the loopback interface (localhost). Here is one example:
[pgbouncer]
listen_port = 5433
listen_addr = localhost
auth_type = any
logfile = pgbouncer.log
pidfile = pgbouncer.pid
The configuration documentation explains clearly how to change the addresses on which the service listens:
listen_addr
Specifies list of addresses, where to listen for TCP connections. You
may also use * meaning “listen on all addresses”. When not set, only
Unix socket connections are allowed.
Addresses can be specified numerically (IPv4/IPv6) or by name.
Default: not set
listen_port
Which port to listen on. Applies to both TCP and Unix sockets.
Default: 6432
Since you've now responded that you've already done this, I'll leave it here for the record, but make an additional suggestion below.
The follow-up posts on the mailing list to the one you referenced provide the answer. I'll quote it here:
User 1
I restarted using /etc/init.d/pgbouncer restart, which effectively
launches pgbouncer with -R for a online restart.
User 2
I suspect the -R is working too well for you - it reuses the old
listening socket, with means the bind address stays the same.
This preference is natural - you rarely change bind addres, but may
change other settings (or pgbouncer version).
You should just do proper stop/start, then it should take new address in use.
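Putting the two parts together, a sketch of the relevant pgbouncer.ini lines for your case - applied with a full stop and start, not an online restart (-R), since the latter reuses the old listening socket:

```ini
[pgbouncer]
listen_addr = *
listen_port = 6432
```

After a proper stop/start, netstat -plnt should show 0.0.0.0:6432 rather than 127.0.0.1:6432.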
| Telnetting the local port working but trying with ip not working |
1,556,108,912,000 |
If you have a list of servers all linking to each other by static addresses, is there some kind of hosts.additional you can use as some kind of file based DNS?
i.e. the hosts file contains static information and the hosts.additional contains the addresses which change regularly.
|
No, there's only one /etc/hosts. You could rebuild it with a cron entry every so often, perhaps by downloading it from a central server which you can update. rsync will do the work for you. Of course, this assumes you have a good reason to avoid setting up DNS.
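One way to emulate a "hosts.additional" is to keep the static entries in one file and the frequently changing ones in another, and rebuild /etc/hosts from both on a schedule. A hedged sketch (the file names are assumptions, not a convention; run it as root from cron):

```shell
# rebuild_hosts STATIC ADDITIONAL TARGET
# Concatenates the static and additional parts into the target hosts file.
rebuild_hosts() {
    static=$1 additional=$2 target=$3
    # Build in a temp file and rename, so resolvers never see a half-written file
    cat "$static" "$additional" > "$target.new" && mv "$target.new" "$target"
}

# Wrapped in a small script, a root crontab entry might run it every 10 minutes:
# */10 * * * * /usr/local/sbin/rebuild-hosts
```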
| Is there some kind kind of hosts.additional file for Linux |
1,556,108,912,000 |
I have my inventory file configured with group variables. Example:
all:
children:
europe:
vars:
network_id: 3
network_name: "europe-eu"
hosts:
europe-eu-1254:
ansible_host: 8.8.8.8
ansible_ssh_pass: password
ansible_ssh_user: user
...
I would like to get the group variables in my tasks, but I have no idea how.
Example of the task:
- name: Start latest container
docker_container:
name: "server-{{ hostvars[inventory_hostname].vars.network_name }}"
image: "{{ docker_registry }}:{{ docker_tag }}"
state: started
recreate: yes
network_mode: host
oom_killer: no
restart_policy: always
become: yes
...
I assume, that {{ hostvars[inventory_hostname].vars.network_name }} is not the right way.
|
Simply reference the variables. For example the playbook
shell> cat playbook.yml
- hosts: all
tasks:
- debug:
var: network_name
- debug:
msg: "{{ network_name }}"
gives (abridged)
shell> ansible-playbook playbook.yml
ok: [europe-eu-1254] => {
"network_name": "europe-eu"
}
ok: [europe-eu-1254] => {
"msg": "europe-eu"
}
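Group variables are merged into each host's hostvars, so if you ever need the value for a host other than the current one, you can reach it that way. A sketch (the hostname is taken from the inventory above; this task is not part of the original playbook):

```yaml
- debug:
    msg: "{{ hostvars['europe-eu-1254'].network_name }}"
```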
| Ansible - Access group variables in tasks |
1,556,108,912,000 |
I have the following entry in the hosts file:
127.0.0.1 postgres
This works most of the time:
[root@l25 log]# ping postgres
PING postgres (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.031 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.020 ms
Sometimes, apparently at random, a number of my services report they can't resolve it:
Failed to submit event: could not translate host name "postgres" to address: System error
In fact they can't resolve any other hosts as well:
Unable to record event with remote Sentry server (Errno::EBUSY - Failed to open TCP connection to xxx.ingest.sentry.io:443 (Device or resource busy - getaddrinfo)):
Rebooting the machine solves the problem for some time, then it starts showing up again.
What could be the cause of this?
|
This was caused by having only 4096 inotify handlers. I've increased the limits and the issue is gone.
fs.file-max = 131070
fs.inotify.max_user_watches = 65536
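To make those limits survive a reboot, one common approach is a sysctl drop-in (the file name is a convention, not a requirement), applied with sysctl --system:

```
# /etc/sysctl.d/90-inotify.conf
fs.file-max = 131070
fs.inotify.max_user_watches = 65536
```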
| Unable to resolve entry in hosts file? |
1,556,108,912,000 |
Hello Unix StackExchange,
I've recently started using StevenBlack's /etc/hosts file, and it seems to have broken some apps I've been using. So far I've noticed that Thunderbird can't log into GMail accounts using Oauth2 anymore, and the BBC Weather widget broke as well. It's very clear to me that these have been caused by the /etc/hosts change from the time the errors occurred at and the characteristic error messages stating that a certain server could not be accessed.
I've tried to look for (Google and BBC) related servers in the hosts file, however, commenting out some lines didn't fix things, unfortunately. It would be of much use to me if there was a way to know which server/website exactly an application is failing to connect to.
Any advice is much appreciated.
|
"It would be of much use to me if there was a way to know which server/website exactly an application is failing to connect to."
Unfortunately we can't guess, so you'll have to find out for yourself. What you'd want to do is see where the packets go. So, open up Wireshark or an alternative and attempt to access a website or application that doesn't function correctly. Then, look where those packets are going and comment out the incorrect entry.
To stop using that custom /etc/hosts file, you could temporarily replace it with a default version:
mv /etc/hosts /etc/hosts.bak
echo "127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4" > /etc/hosts
| How to see which website an app is trying to access? |
1,556,108,912,000 |
I have deployed a CentOS 6.6 VM (hortonwoks sandbox VM) on my Windows 7 OS. The VM publishes apache spark applications through a http url. The url's have the hostname as "sandbox.hortonworks.com" (e.g.http://sandbox.hortonworks.com:8088/proxy/application_1430918431488_0001/). All the port forwarding is set up and is working as expected. I can access the url using http://localhost:8088/proxy/application_1430918431488_0001/.
To access the url as is
(i.e http://sandbox.hortonworks.com:8088/proxy/application_1430918431488_0001/) I would have to add an entry to the /etc/hosts file as below
127.0.0.1 localhost sandbox.hortonworks.com
I tried adding this and restarted the "network" service, but the url doesn't work. I get an error
This webpage is not available
ERR_NAME_NOT_RESOLVED
The complete file contents of /etc/hosts file are as below
127.0.0.1 localhost.localdomain localhost
10.0.2.15 sandbox.hortonworks.com sandbox ambari.hortonworks.com
127.0.0.1 localhost sandbox.hortonworks.com
What changes do I need to make to get the url working?
Thanks!
|
The address 10.0.2.15 is an internal address for your VM which cannot be reached from the Host OS because of the NAT mode.
You need to change your network adapter to use bridged mode or host-only-adapter.
In bridged mode the guest will try to get an address on the host's network; using a host-only adapter you will create an interface on the host OS, normally with address 192.168.56.1, and you need to configure an address in your guest OS (for example 192.168.56.101) to connect to it. I advise you to use a static address so you don't have to change the hosts file when the guest's address changes.
If you reconfigured your (VM) network edit the Host OS hosts file and put the ip address (192.168.56.101) there to point to your necessary URLs.
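For example, assuming a host-only adapter and a guest address of 192.168.56.101, the entry in the Host OS hosts file (on Windows: C:\Windows\System32\drivers\etc\hosts) would be:

```
192.168.56.101 sandbox.hortonworks.com sandbox ambari.hortonworks.com
```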
| Adding alias in /etc/hosts for Linux OVM |
1,556,108,912,000 |
how do I set the hostname, FQDN, in yast?
I ran yast => network devices => network services => hostname/DNS:
YaST2 - lan @ arrakis
Network Settings
┌Global Options──Overview──Hostname/DNS──Routing──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│┌Hostname and Domain Name───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐│
││Hostname Domain Name ││
││arrakis▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ bounceme.net▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ ││
││[ ] Change Hostname via DHCP ││
││[x] Assign Hostname to Loopback IP ││
│└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘│
│Modify DNS configuration Custom Policy Rule │
│Use Default Policy▒▒▒▒▒↓ ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒↓ │
│┌Name Servers and Domain Search List────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐│
││Name Server 1 ┌Domain Search──────────────────────────────────────────────────────────────────────┐││
││▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ │bounceme.net │││
││Name Server 2 │ │││
││▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ │ │││
││Name Server 3 │ │││
││▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ └───────────────────────────────────────────────────────────────────────────────────┘││
│└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘│
│ │
│ │
│
which made this alteration:
linux-k7qk:~ #
linux-k7qk:~ #
linux-k7qk:~ # cat /etc/hosts
#
# hosts This file describes a number of hostname-to-address
# mappings for the TCP/IP subsystem. It is mostly
# used at boot time, when no name servers are running.
# On small systems, this file can be used instead of a
# "named" name server.
# Syntax:
#
# IP-Address Full-Qualified-Hostname Short-Hostname
#
127.0.0.1 localhost
# special IPv6 addresses
::1 localhost ipv6-localhost ipv6-loopback
fe00::0 ipv6-localnet
ff00::0 ipv6-mcastprefix
ff02::1 ipv6-allnodes
ff02::2 ipv6-allrouters
ff02::3 ipv6-allhosts
127.0.0.2 arrakis.bounceme.net arrakis
linux-k7qk:~ #
and now I see that my hostname, apparently, is arrakis -- is that correct?
linux-k7qk:~ #
linux-k7qk:~ # hostname
arrakis
linux-k7qk:~ #
linux-k7qk:~ # ping arrakis
PING arrakis.bounceme.net (127.0.0.2) 56(84) bytes of data.
64 bytes from arrakis.bounceme.net (127.0.0.2): icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from arrakis.bounceme.net (127.0.0.2): icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from arrakis.bounceme.net (127.0.0.2): icmp_seq=3 ttl=64 time=0.049 ms
^C
--- arrakis.bounceme.net ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.039/0.047/0.053/0.005 ms
linux-k7qk:~ #
linux-k7qk:~ #
linux-k7qk:~ # ping arrakis.bounceme.net
PING arrakis.bounceme.net (127.0.0.2) 56(84) bytes of data.
64 bytes from arrakis.bounceme.net (127.0.0.2): icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from arrakis.bounceme.net (127.0.0.2): icmp_seq=2 ttl=64 time=0.040 ms
64 bytes from arrakis.bounceme.net (127.0.0.2): icmp_seq=3 ttl=64 time=0.037 ms
^C
--- arrakis.bounceme.net ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.034/0.037/0.040/0.002 ms
linux-k7qk:~ #
why does the prompt stay as k7qk?
Is the hostname actually the FQDN arrakis.bounceme.net?
(Please note that I'm not running a publicly available web server or anything like that, just on the LAN. I only want to ensure that the hostname has been changed.)
uname shows:
linux-k7qk:~ #
linux-k7qk:~ # uname -a
Linux arrakis 3.11.10-25-default #1 SMP Wed Dec 17 17:57:03 UTC 2014 (8210f77) x86_64 x86_64 x86_64 GNU/Linux
linux-k7qk:~ #
While I'm not running arrakis as a publicly available web server, or anything like that, I want it set up for that eventuality. Perhaps just on the LAN.
The FQDN arrakis.bounceme.net is registered on noip.com as a host name; part of their free services. (No, that's not a plug for noip, well, it is in a round-a-bout way...)
|
In order for your changes to be visible in your current shell you need to terminate the current session and log back in. Your hostname will have changed to arrakis.
Explanation: a terminal session needs to be closed and re-opened for the profile to be re-read.
| how do I set a hostname in yast? |
1,556,108,912,000 |
I want to deny access to a specific URL. It isn't a whole website, it's a specific URL. I want to do it simply so that some applications including browsers can't make requests for it. I tried this:
$ cat /etc/hosts
127.0.0.1 http://url_to_block/url_to_block2/url_to_block3
but it didn't help me, for example, in the browser some website keeps on sending the ajax requests to that URL and receiving the responses from it.
Why not? How to do it?
|
The file /etc/hosts is only for mapping hostnames with IP addresses, not for URLs. There isn't any method that I'm aware which will allow you to do this across the board for all applications (that's built into a typical Linux distro), but you have a couple of options that will allow you to do it either per browser, via plugins, or using an HTTP proxy that filters all your requests from web browsers.
Plugins
2 such plugins for Firefox:
FoxFilter
URLFilter
There are others. For Chrome:
Blocksi
tinyFilter
I would probably go this route if it's just for yourself, or a couple of users on a handful of systems.
Proxies
If it's for a larger domain of users then you'll need to use a HTTP proxy. Depending on which you choose, you may have to configure each user's browser independently.
If you choose to proxy all HTTP traffic using something like Squid, you can configure it as a transparent proxy, but this will have to be done on a system that's sitting in between your systems and the internet. Directions on how to set this up are discussed in this article, titled: Linux: Setup a transparent proxy with Squid in three easy steps.
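As a rough sketch of the Squid approach, an ACL denying just the URL from the question would look like this (an untested fragment; a real squid.conf needs the usual http_access allow rules around it):

```
acl blocked_url url_regex ^http://url_to_block/url_to_block2/url_to_block3
http_access deny blocked_url
```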
| Block a certain URL? |
1,556,108,912,000 |
I'm running dropbear as SSH daemon on Debian (actually Raspbian). I tried setting
# /etc/hosts.allow
dropbear:192.168.1.1
# my static ip from which I SSH connect to the device
and
# /etc/hosts.deny
ALL:ALL
# block all others
Then I restarted the whole device. I could still SSH into the device from different IP addresses and even from remote. Did I configure the files wrong or does dropbear not support these two files?
|
Dropbear doesn't include any support for /etc/hosts.allow and /etc/hosts.deny. These files are managed by the TCP wrapper library (libwrap), which Dropbear doesn't use. Some third-party packages patch Dropbear for TCP wrapper support, but not Debian.
You can start Dropbear via tcpd to get TCP wrapper support.
/usr/sbin/tcpd /usr/sbin/dropbear -i
If you only want to filter by IP address, you can do it with iptables, e.g. to drop SSH connections from anywhere except 192.168.1.1:
iptables -A INPUT -p tcp --dport 22 ! -s 192.168.1.1 -j DROP
| Does dropbear take care of hosts.allow and hosts.deny? |
1,556,108,912,000 |
There seem to be more questions like this; excuse me if this is similar. I want to redirect a hostname to an IP address on my server computer.
This is what I put in my hosts file on the client computer (IMac):
192.168.3.2:8080. dev.dev
It does not resolve to that IP address. Should this normally work, or do I have to do extra things?
(The IP address works in the browser).
|
You do not specify the port in a hosts file. The hosts file expects IP and hostname on each line. Try removing the port, as well as the dot after the IP that I also see in your entry.
e.g.
192.168.3.2 dev
then try http://dev:8080 in your browser to get to that host:port. You may need to flush the cache on your client computer. From the terminal in OSX 10.6+:
sudo dscacheutil -flushcache
| Directing hostname to another computer within lan using hosts file |
1,556,108,912,000 |
I changed the name of the host in /etc/hosts to
IP.GOES.HERE newname
Apache does recognize the new server name; but still, via ssh i get this on the ssh prompt:
[root@oldname]
Why? Where else do I need to change the server name?
I'm using CentOS 6.3
|
On CentOS you set the system hostname in /etc/sysconfig/network.
This setting change takes effect on reboot. To change the hostname on a running system without rebooting use the hostname command.
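For example (newname is a placeholder for your chosen hostname):

```
# /etc/sysconfig/network -- read at boot on CentOS 6
NETWORKING=yes
HOSTNAME=newname
```

Running hostname newname as root changes it for the running system without a reboot.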
| Changing the server name |
1,556,108,912,000 |
I recently installed my EPSON L3150 printer's drivers and about the same time I started having weird (and random) DNS name resolution errors in some applications (ssh, nextcloud-client), that I can only fix by restarting the NetworkManager service:
systemctl restart NetworkManager
For example:
$ ssh example.mydomain
ssh: Could not resolve hostname example.mydomain: Name or service not known
Another example is getent, which returns nothing and exits with code 2:
$ getent hosts example.mydomain
$ echo $?
2
But nslookup works fine:
$ nslookup example.mydomain
...
Name: example.mydomain
Address: 192.168.0.10
I narrowed it down to my nsswitch.conf file, which I blame my printer installer for changing it. I found a nsswitch.conf.bak lying besides a nsswitch.conf, created at the same time that I installed the printer drivers using dnf install epson/*.rpm.
The file had this change in the hosts line:
-hosts: files dns myhostname
+hosts: files myhostname mdns4_minimal [NOTFOUND=return] resolve [!UNAVAIL=return] dns
So the question is, why does the "new" configuration fail so randomly? How can I prevent it? I don't think just restoring the original file would be a solution since the new file seems to be auto-generated, it says so right at the top:
Generated by authselect on Sat Feb 12 18:53:06 2022
Uninstalling the driver would also not be a solution.
EDIT:
The culprit seems to be systemd-resolved. My network config is managed by NetworkManager and I setup two DNSs:
$ nmcli con show my-lan
...
ipv4.dns 192.168.0.1 8.8.8.8
It seems that whenever my computer wakes up from sleep, NetworkManager falls back to the second DNS:
$ systemd-resolve --status
...
Link 2 (enp39s0)
Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 8.8.8.8
DNS Servers: 192.168.0.1 8.8.8.8
DNS Domain: mydomain
Thus causing systemd-resolve example.mydomain to fail (because 8.8.8.8 can't resolve my domain):
query: resolve call failed: 'example.mydomain' not found
I guess it's a bug in NetworkManager?
|
mdns4_minimal [NOTFOUND=return] is needed for multicast DNS for the Epson printer.
resolve [!UNAVAIL=return] enables name resolution through systemd-resolved (see https://www.freedesktop.org/software/systemd/man/nss-resolve.html).
If the command systemctl status systemd-resolved gives back Unit systemd-resolved.service could not be found. you may safely remove resolve [!UNAVAIL=return] from the hosts line. After that, resolving should be back to normal.
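Assuming systemd-resolved is indeed absent on your system, the resulting hosts line would look like this, keeping the mDNS entry the printer needs:

```
hosts: files myhostname mdns4_minimal [NOTFOUND=return] dns
```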
| Random DNS errors after change to nsswitch.conf |
1,556,108,912,000 |
I have installed spotify on arch linux via yay. When I open spotify it says it is offline even when I have working internet connection and throws error code: 4
This is log of spotify:
spotify: /usr/lib/libcurl-gnutls.so.4: no version information available (required by spotify)
/opt/spotify/spotify: /usr/lib/libcurl-gnutls.so.4: no version information available (required by /opt/spotify/spotify)
/opt/spotify/spotify: /usr/lib/libcurl-gnutls.so.4: no version information available (required by /opt/spotify/spotify)
/proc/self/exe: /usr/lib/libcurl-gnutls.so.4: no version information available (required by /proc/self/exe)
I will not post output of spotify --show-console because it contains login tokens and sessions ids.
How can I fix this so spotify can work normally ?
Thank you for help
|
This problem can be hopefully solved with one of these three steps:
1) Add spotify servers to hosts
You have to add this
0.0.0.0 weblb-wg.gslb.spotify.com
0.0.0.0 prod.b.ssl.us-eu.fastlylb.net
to /etc/hosts
2) Disable notifications
Sometime spotify get stuck while trying to make notification and stop loading stuff. You can disable them by adding this
ui.track_notifications_enabled=false
to ~/.config/spotify/current/.config/spotify/Users/[Some user]-user/prefs
3) Updating your system and AUR packages
To upgrade my system and AUR packages I used:
yay -Syyu
| Arch linux: spotify stuck offiline |
1,556,108,912,000 |
I'm wondering why a hosts.deny file with about 600 denied IPs and ranges would suddenly empty itself. Is there any reason for this to happen automatically? No one else has connected to the server to make changes, according to last and auth.log.
Thanks, Tmanok.
|
In this very specific instance, I had installed features from FreeNAS for additional server management; unfortunately that included a configuration database I was not anticipating. Every reboot clears CLI configuration: basically, a backup of the configuration exists on the machine and overwrites any changes made to the running system configuration. The only way to prevent this is to update that database, which then updates the backup configuration.
Thanks,
| Hosts.deny on FreeBSD suddenly empty? |
1,410,012,509,000 |
My /etc/hosts file looks like this:
# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
# /etc/cloud/cloud.cfg or cloud-config from user-data
127.0.1.1 ansible-server ansible-server
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
node1 0.0.0.0
node2 0.0.0.0
I have added the node1 and node2 and naturally the IP 0.0.0.0 is replaced by the IP of the node.
I would assume this works perfectly fine, however it doesn't. I thought SSH simply ignores the hosts file:
root@ansible-server:~# ssh root@node1
ssh: Could not resolve hostname node1: Name or service not known
root@ansible-server:~# ssh root@node2
ssh: Could not resolve hostname node2: Name or service not known
However, I can't ping these servers by their name either:
root@ansible-server:~# ping node1
ping: unknown host node1
root@ansible-server:~# ping node2
ping: unknown host node2
It is pretty clear I'm doing something really stupid here... but what?
Additional information: this server runs Ubuntu 14.04.2 LTS and is hosted on DigitalOcean. The server this is occurring on an Ansible server.
|
The format of lines in /etc/hosts is address first and name(s) second
0.0.0.0 node1
0.0.0.0 node2
192.168.1.1 myroutermaybe
8.8.8.8 googledns # in case DNS doesn't work for DNS???
127.0.0.1 localhost
or where several names map to the same address
0.0.0.0 node1 node2 node3 stitch626
ADDED, thanks to reminder by fpmurphy1:
The first name (if more than one) is used as the canonical or "official" name for gethostbyaddr etc, so if you have a domain name assigned to this machine/address it is usually clearest and most useful to put the Fully Qualified Domain Name (FQDN) as the first name.
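A quick sanity-check sketch that would have caught the reversed lines in the question: flag any non-comment line whose first field doesn't look like an address (a rough heuristic - an all-hex hostname would slip through):

```shell
# check_hosts_format FILE
# Prints every non-comment, non-blank line whose first field isn't made of
# characters that can appear in an IPv4/IPv6 address.
check_hosts_format() {
    awk '!/^[[:space:]]*#/ && !/^[[:space:]]*$/ && $1 !~ /^[0-9a-fA-F:.]+$/ {
        print "bad: " $0
    }' "$1"
}

# check_hosts_format /etc/hosts
```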
| Why is my /etc/hosts file not being read? |
1,410,012,509,000 |
I know that you can add hostnames to /etc/hosts so that resolving them doesn't actually perform a DNS lookup. This would be a line such as
123.45.67.89 myhostname
My question is, what can I set the first part (IP address) to so that any attempted connection to this myhostname will always fail?
Edit: By fail, I mean an explicit ICMP rejection if possible. I don't want to wait for a timeout.
Reason
I have a VM instance running somewhere, sometimes. The cloud server sets a different IP on every launch and I'm not paying for any public domain name. I have a script that launches the instance and adds a line to /etc/hosts so I can easily ssh or open a browser tab to this host using myhostname.
That all works great, but I'd like to also remove this entry when the instance is killed, and have attempted connections to myhostname not incur any DNS lookups or (worse) try to connect to some same-named host on whatever local domain I'm sitting in at the moment.
|
So you want an IP address that a) is guaranteed not to match anything remote and b) will reject everything.
Make one:
iptables -I INPUT 1 -i lo -d 127.66.66.66 -p tcp -j REJECT --reject-with tcp-reset
iptables -I INPUT 2 -i lo -d 127.66.66.66 -j REJECT
(or whatever is the equivalent for your iptables firewall management tool of choice.)
Now all TCP connections to 127.66.66.66 will get a TCP RST packet in response, and everything else gets an ICMP rejection. (TCP RST will generally refuse a TCP connection even faster than ICMP, in my experience.)
IP address 127.66.66.66 is within the loopback segment, so it is guaranteed not to interfere with anything non-local.
| IP address in /etc/hosts that will always fail? |
1,410,012,509,000 |
I haven't found any browser that supports aliases defined in /etc/hosts, even among those where single terms (without prepending http:// in the browser) are not interpreted as search terms for the default search engine (the case with the badwolf browser).
Aliases work when prepended with http://, e.g.
http://bse but having to type http://defeats the purpuse of using them.
Cloudflare does not allow to access bitcoin.stackexchange.com via an alias, defined as
172.64.144.30 bitcoin.stackexchange.com bse
Error 1003 Ray ID: 85750e9fc9b65e4d • 2024-02-18 08:56:29 UTC
Direct IP access not allowed
What happened?
You've requested an IP address that is part of the Cloudflare network. A valid Host header must be supplied to reach the desired website.
When I attempt to reach bitcoin.stackexchange.com I do not get any error from Cloudflare.
Am I sending a host header when attempting to reach bitcoin.stackexchange.com from a desktop PC with 172.64.144.30 bitcoin.stackexchange.com bse appended to /etc/hosts?
Why do I get different browser behaviour when attempting to reach http://bse and bitcoin.stackexchange.com in the browser?
Are there any settings in browsers such as Brave, Firefox (about:config ?), Chromium that I could apply to
use my aliases seamlessly?
Other than in /etc/hosts, my DNS queries are resolved by the home router, as shown by
the content of my /etc/resolv.conf
# Generated by Connection Manager
search home
nameserver 192.168.1.1
The research I did is man hosts. The hypothetical purpose of appending 172.64.144.30 bitcoin.stackexchange.com bse to /etc/hosts is limiting the number of certain DNS queries (which may make my profiling seem too radical and hence result in targeted surveillance) sent to the DNS server of my router. I am aware of the possibility of reverse DNS lookups, but let that be outside the scope of the question.
|
Yes, your browser will send a Host header, and in the case of HTTPS access, additionally use SNI, to tell the server which site it wants to access. This is of course nothing unusual - a single server can serve any number of websites on any number of hosts, so it needs to know which one you're trying to access. The IP simply isn't enough for a lot of web servers these days.
Because of the above, a server may choose to serve you a default website or no website at all if it isn't configured to serve the host you're trying to access on it.
In Firefox, you can assign keywords to bookmarks so that just accessing the keyword will take you to the bookmarked link. You don't have to use http://<keyword>, just the bare keyword. Right-click on any bookmark and edit it, and the resulting dialog box will have a field for entering a keyword.
| Why do I get different browser behaviour when attempting to reach bitcoin.stackexchange.com and the alias of its IP in /etc/hosts? |
1,410,012,509,000 |
I want to know what is different between this two configuration :
First :
127.0.0.1 localhost my-hostname
192.168.10.12 host-a a.com
Second :
127.0.0.1 localhost my-hostname
192.168.10.12 host-a
192.168.10.12 a.com
What happened if i do not use aliases ?
|
For the /etc/hosts file these two alternatives are equivalent when resolving an address for a name. In both cases the names host-a and a.com will be resolved as the name host-a with address 192.168.10.12.
For a reverse lookup of name from IP address the two alternatives are slightly different. Both will return host-a as the canonical name. The first will include a.com as an alias.
You will get the first line of the file that matches, and the first text entry on that line is the canonical name.
Test framework (modify as necessary to test various scenarios):
perl -MData::Dumper -e 'print Dumper(gethostbyname("a.com"))'
perl -MData::Dumper -e 'print Dumper(gethostbyaddr(pack("C4", 192, 168, 10, 12), 0))'
Personally, I try to avoid using /etc/hosts, preferring to use DNS. However there are some cases where it can be useful and in these situations I always put the FQDN first on a line and its alias(es) afterwards. I do the same by line in the file for actual machine name and its services.
Example, where eleven is the server's name and it provides the web and ftp services:
192.168.10.11 eleven.contoso.com eleven
192.168.10.11 web.contoso.com web
192.168.10.11 ftp.contoso.com ftp
| /etc/hosts alias or multiple record |
1,410,012,509,000 |
Say I want to make sure that no malicious actor or process has made an attempt to falsely represent a legitimate web server with their own malicious server on Linux, anticipating that the user will access an external executable file.
For example, adding an entry like 12.34.56.78 legitsite.com, then the next time I visit legitsite.com/downloads/setup.sh, I actually download 12.34.56.78/downloads/setup.sh, which is malware.
It looks like at the very least I'd have to check /etc/hosts and /etc/resolv.conf, but is there anything else I would want to check too? Should I check the ip route command for entries appearing to target legitsite.com too?
|
There's nothing you can do if some other party has the root privileges to modify name resolution. That means they can already mess with all your software. You've lost. Nothing you can do on your machine could be suppressed by that almighty attacker.
But: the appropriate response to the possibility that someone was able to do that is very simple, and already deployed: It's TLS with certificate checking.
That's it. That's all there is to it. Use https. Nothing less will do, if you can't trust the network, than proper cryptography, and TLS is that.
Setting up your webserver to support that is trivial these days; use letsencrypt.org, and as soon as it works, enable strict host security.
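As a toy illustration of certificate checking, which is the guarantee doing the work here, one can create and verify a throwaway self-signed certificate (a sketch; the name and paths are made up, and a real deployment would use a CA such as Let's Encrypt rather than a self-signed cert):

```shell
# Create a throwaway key and self-signed certificate for a made-up name.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=legitsite.example" \
    -keyout /tmp/toy.key -out /tmp/toy.crt 2>/dev/null

# Verification succeeds only against the matching certificate/CA -- this is
# the check that a hosts-file redirect cannot forge for a name it doesn't own.
openssl verify -CAfile /tmp/toy.crt /tmp/toy.crt
```

With a trusted CA instead of the self-signed file, the attacker's 12.34.56.78 server cannot present a valid certificate for legitsite.com, so the TLS handshake fails no matter what name resolution says.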
| Is checking /etc/hosts and /etc/resolv.conf sufficient to know that no server is trying to falsely represent itself as another? |
1,410,012,509,000 |
What is the name of the program or script that opens and reads /etc/hosts when name resolution is needed? and is it different for every linux/unix distro?
I read the man page for hosts and tried whereis looking for a binary named hosts, but whereis outputs filenames like /etc/hosts.allow and /etc/hosts.deny. I was under the impression that hosts.allow and hosts.deny were config files for TCP Wrappers, and now I'm more confused than when I started.
|
There's no specific program that parses this file.
A number of standard files (e.g. /etc/hosts) are parsed by standard library functions (e.g. gethostbyname(3)). However the story may be a lot more complicated.
Hostname resolution is typically controlled by an entry in /etc/nsswitch.conf.
e.g.
% grep hosts /etc/nsswitch.conf
hosts: files dns
This entry tells the resolver routines to use the "files" backend, and if the result isn't found there then to do a DNS lookup. Other values could be placed there (e.g. ldap or nis) which can change the way that hostnames are looked up.
These routines are typically called "Naming Services". The same concepts are also used for username lookups (passwd), group entries (group) and so on.
So when you do ping a.remote.host the ping program will call a glibc library function, and that will load the routines defined in nsswitch.conf. The result is you won't see a specific program doing the lookup; ping does the work itself, via the library and NS routines.
There's a program called getent that can be used to do the name searching; you specify a "database" (one of the entries in nsswitch.conf) and the value you want to search for.
So
getent hosts a.remote.host
will do a name lookup following the rules defined in nsswitch.conf. This is useful for testing purposes, and sometimes also in scripts.
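For instance, the following goes through the NSS stack rather than straight to DNS (the exact output depends on your local /etc/hosts):

```shell
# Resolve a name through NSS (files, dns, ... in the order set by nsswitch.conf)
getent hosts localhost
```

On a typical system this prints something like `127.0.0.1 localhost` or `::1 localhost`, taken from /etc/hosts by the files backend.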
--- addendum ----
This information is from Stephen's comment below, but very useful, so I am adding it to his answer.
strace getent hosts www.google.com 2>&1 | grep libnss_
will tell you which library (or none) was used to resolve the name. If it says libnss_files, then /etc/hosts was used. If it says libnss_dns, then DNS was used. libnss_myhostname means that nothing else worked and a GNU fallback mechanism kicked in (and may have failed). If no library is listed, then you probably used a numerical address, like 127.0.0.1, so no resolver was necessary.
| program/application or script that reads /etc/hosts |
1,410,012,509,000 |
I am getting the following error when trying to set the Primary node for DRBD.
'node1' not defined in your config (for this host).
I know this is related to DNS/hostname/hosts and the clusterdb.res config. I know this because I originally got an error when trying to start clusterdb.res when node1 didn't resolve correctly. What confuses me is that I can start clusterdb.res if I either run this command on the hosts:
hostnamectl set-hostname $(uname -n | sed s/\\..*//)
to make the hostname resolve to node1 instead of node1.localdomain, or add node1.localdomain to the config; either works. But I have tried all combinations and can't seem to get this command to take:
drbdadm primary --force node1 && cat /proc/drbd
My Configs
/etc/drbd.d/clusterdb.res
resource clusterdb{
protocol C;
meta-disk internal;
device /dev/drbd0;
startup {
wfc-timeout 30;
outdated-wfc-timeout 20;
degr-wfc-timeout 30;
}
net {
cram-hmac-alg sha1;
shared-secret sync_disk;
}
syncer {
rate 10M;
al-extents 257;
on-no-data-accessible io-error;
verify-alg sha1;
}
on node1 {
disk /dev/sda3;
address 192.168.1.216:7788;
}
on node2 {
disk /dev/sda3;
address 192.168.1.217:7788;
}
}
/etc/hosts :
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.216 node1
192.168.1.217 node2
/etc/hostname
node#
My full write up ATM (wip)
Edits :
[root@node1 ~]# hostname
node1
[root@node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.1.1 node1
192.168.1.216 node1
192.168.1.217 node2
[root@node1 ~]#
Update: I have gotten this to work with LVM following this guide exactly, so I think my issue actually lies with the following lines of code. But for now I think I will stick with LVM since it works, unless somebody else really wants to work on this. (My working LVM writeup)
device /dev/drbd0;
or
device /dev/drbd0;
The reason I say this, is I used the same hosts/hostname/shortname/ip_addr but LVM and it worked, but maybe I missed something the first time, I fixed in my new VM Template (I started from scratch to build LVM)
|
You're not using the drbdadm command correctly. It wants the resource name, where you're giving it a node name.
Try this instead (from node1):
drbdadm up clusterdb
drbdadm primary --force clusterdb
As a side note, DRBD expects the hostnames in its config to be the same as uname -n.
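That uname -n requirement can be sanity-checked with a quick sketch (the sample file, its path, and the fixed node name are made up for illustration; in practice you would compare against $(uname -n) and your real resource file):

```shell
# Write a sample "on <host>" stanza like the one in the question.
cat > /tmp/clusterdb.res <<'EOF'
on node1 {
    disk /dev/sda3;
    address 192.168.1.216:7788;
}
EOF

# DRBD matches the "on" names against `uname -n`; simulate with a fixed name.
node=node1                      # in practice: node=$(uname -n)
if grep -q "^on ${node} " /tmp/clusterdb.res; then
    echo "hostname matches a resource stanza"
fi
```

If the grep finds no match for your actual uname -n, you will see exactly the "'node1' not defined in your config (for this host)" class of error.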
| DRBD - 'node1' not defined in your config (for this host) - Error when setting Primary |
1,410,012,509,000 |
I was told today that /etc/hosts is not a viable option for testing a new destination server: despite the /etc/hosts modifications, cached DNS results could still be used and we would get incorrect results when testing.
The only option we were given is to setup a proxy server and set the proxy info in each browser.
If this is the case, I am wondering whether I need to set up every browser on my machine for testing.
|
/etc/hosts does not use DNS at all so talking about DNS cache makes no sense. This file is authoritative over DNS concerning name resolution, as specified in /etc/nsswitch.conf (the Name Service Switch config file).
So if you enter an IP-host mapping in /etc/hosts it will always take precedence over DNS. (Of course, unless you modified /etc/nsswitch.conf, and you should have a good reason to do so.)
| Proxy server vs /etc/hosts file |
1,410,012,509,000 |
I switched from Ubuntu to Debian
And in debian the /etc/hosts file after a new install is (on a cloud server):
127.0.1.1 static.246.62.63.178.clients.your-server.de static
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
What is static.246.62.63.178.clients.your-server.de ?
It seems like the IP address is reserved for future use.
Do I leave it in place or replace it? E.g. is this okay (if my server is example.com):
127.0.0.1 localhost example.com
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback example.com
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
|
The IP address-as-a-name would usually be read in reverse (much like a DNS PTR entry), so the name should correspond to 178.63.62.246 rather than 246.62.63.178.
In this instance that would suggest a server from Hetzner.
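The octet reversal is easy to check mechanically:

```shell
# Reverse the dotted quad embedded in the provider-generated name.
echo "246.62.63.178" | awk -F. '{ print $4 "." $3 "." $2 "." $1 }'
# -> 178.63.62.246
```

The result is the machine's actual public address, which is why the generated name should be read back-to-front.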
You can remove the entry, but if that's what the machine is called - what it's known as - it would be prudent to have an entry of some sort. For example
127.0.0.1 localhost loopback
178.63.62.246 my.preferred.hostname.example.com
178.63.62.246 static.246.62.63.178.clients.your-server.de
| Debian /etc/hosts has unusual entry |
1,410,012,509,000 |
I have a server with search example.com in resolv.conf, and it works correctly for DNS lookups. That is, if I ping host, and host1.example.com is in DNS, it is found.
But if host1.example.com is in /etc/hosts instead of DNS, it is not found. I assume the entry in resolv.conf only applies to DNS.
Is there a way to make a domain search path that works for /etc/hosts entries, and if not, why not?
|
The simple and generally used method is to include both host1 and host1.example.com to /etc/hosts.
However, you can reach your goal using dnsmasq. dnsmasq will read your hosts file (configurable; this is the default). You just have to set your original nameserver as upstream in dnsmasq and localhost as the nameserver in resolv.conf, and you can keep your search option. You will get the added benefit of a locally cached name service.
This is the most basic (probably server) setup; if you are using a resolvconf-like nameserver manager you have to configure that instead.
It's worth keeping in mind that if you make changes in /etc/hosts you must restart dnsmasq.
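A minimal sketch of that dnsmasq setup, assuming the original nameserver were 192.168.1.1 (the addresses and options here are illustrative, not a tested configuration):

```
# /etc/dnsmasq.conf (fragment) -- dnsmasq reads /etc/hosts by default
server=192.168.1.1        # forward unmatched queries to the original resolver
no-resolv                 # do not take upstream servers from /etc/resolv.conf

# /etc/resolv.conf
# nameserver 127.0.0.1
# search example.com
```

With this in place, a lookup of host1 first hits dnsmasq, which answers from /etc/hosts (including search-suffix expansion) before falling back to the upstream server.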
| How can I apply domain search paths to /etc/hosts lookups? |
1,410,012,509,000 |
I want to make a bunch of websites unaccessible on my computer.
My hosts.allow file:
sendmail: all
# /etc/hosts.allow: list of hosts that are allowed to access the system.
# See the manual pages hosts_access(5) and hosts_options(5).
#
# Example: ALL: LOCAL @some_netgroup
# ALL: .foobar.edu EXCEPT terminalserver.foobar.edu
#
# If you're going to protect the portmapper use the name "rpcbind" for the
# daemon name. See rpcbind(8) and rpc.mountd(8) for further information.
My hosts.deny file:
# /etc/hosts.deny: list of hosts that are _not_ allowed to access the system.
# See the manual pages hosts_access(5) and hosts_options(5).
#
# Example: ALL: some.host.name, .some.domain
# ALL EXCEPT in.fingerd: other.host.name, .other.domain
#
# If you're going to protect the portmapper use the name "rpcbind" for the
# daemon name. See rpcbind(8) and rpc.mountd(8) for further information.
#
# The PARANOID wildcard matches any host whose name does not match its
# address.
#
# You may wish to enable this to ensure any programs that don't
# validate looked up hostnames still leave understandable logs. In past
# versions of Debian this has been the default.
# ALL: PARANOID
ALL: .vk.com
ALL: .ria.ru
ALL: facebook.com
My hosts file:
127.0.0.1 localhost
127.0.0.1:82 testsecond
127.0.1.1 shc
127.0.2.2:81 someth.com
127.0.2.2:83 test
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe02::1 ip6-allnodes
fe02::2 ip6-allrouters
I do follow all recommendations about settings hosts* files and
I still CAN access them. I must do something really stupid or wrong.
For me it looks like they are just ignored.
|
hosts.deny is for servers, not for clients running on your computer, so you can't block websites with it. I suggest reading the hosts_access(5) man page for your system (Debian version, FreeBSD version).
By the way, there's a proposal by Lennart Poettering to get rid of tcpwrappers/tcpd in Fedora, and OpenSSH will do the same.
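Since the original goal was blocking websites on your own machine, a common alternative is /etc/hosts itself, mapping the names to an unroutable address (a sketch, not a complete solution; note that entries take no port suffix, and browsers using cached DNS or DNS-over-HTTPS may bypass it):

```
# /etc/hosts (fragment) -- one entry per exact hostname, no ports
0.0.0.0 vk.com
0.0.0.0 www.vk.com
0.0.0.0 facebook.com
0.0.0.0 www.facebook.com
```

Each hostname must be listed explicitly; unlike hosts.deny patterns, /etc/hosts does not match whole domains like .vk.com.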
| Linux hosts.deny settings are not applied |
1,410,012,509,000 |
I use a server with multiple IP addresses as a Squid proxy. Sadly every query to every IP address will expose the primary hostname of my webserver. So I added the following lines to my /etc/hosts file:
127.0.0.1 localhost
213.2XX.2XX.XXX main.mars.customer.com main
89.1XX.1XX.XX6 melle
89.1XX.131.X9 hannes
89.1XX.131.X0 vx
37.1XX.XXX.2X vx2
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
I restarted the networking and rebooted the server, but every hostname query to every IP leads to main.mars.customer.com. I hope you can hint at what is exposing my hostname and how to change it.
I'm testing this by using hostname lookup services on the web.
|
A server/machine can only ever have a single hostname; that is it.
That means hostname will always return the last hostname set for that machine.
Each IP can have a different name associated with it, in that when somebody connects to it, or from it, and asks "what is the name associated with this IP?" they get the name assigned to that IP; conversely, when they query a name, they get the IP associated with that name.
Here are the ("default" Linux) commands to use: getent hosts ip.ad.dr.ess and getent hosts nametoquery
Then there is the "web": for that you'll need to fix/change things in the reverse lookup tables, sometimes referred to as rDNS (Reverse DNS), and for that you'll need to talk to the relevant DNS administrator.
The IPs look like they belong to a hosting provider, in which case (if I guess the correct provider) you might just find a reverse lookup feature in the control panel under the IPs section.
| Create hostnames per IP on Debian |