| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,693,183,335,000 |
I have an offline system (a robot running Ubuntu, not connected to any network) and I want to monitor its resource usage over time (mainly CPU and memory). I am used to Influx and Prometheus, and I was wondering if there is a tool to save the monitoring data and store it until I connect to the machine and export it, and if possible export it to an online Prometheus or Influx instance.
I've looked at Promqueen, but it seems outdated and creates a new Prometheus database instead of appending the data to an existing one.
Are there some tools or an industry standard for this scenario?
|
In the end I've given up on Prometheus, which, without a tool like Promqueen, isn't designed to ingest out-of-date data.
I set up Telegraf on the robot and made it write the data in Influx line protocol, so I just have to retrieve the metrics file and import it into an Influx database.
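As a sketch of what that pipeline stores, here is a minimal, hypothetical formatter for Influx line-protocol records; the measurement, tag, and field names are made up, and real Telegraf emits these records for you via its file output plugin:

```python
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Format one metric as an Influx line-protocol record:
    measurement,tag=value field=value timestamp
    (naive rendering: assumes numeric field values, no escaping)."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

# A record like Telegraf's file output would produce:
line = to_line_protocol("cpu", {"host": "robot"}, {"usage": 42.5},
                        ts_ns=1693183335000000000)
print(line)  # cpu,host=robot usage=42.5 1693183335000000000
```

A file of such lines can later be bulk-imported into an Influx instance once the robot is reachable.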
| How to monitor an offline Linux system and export the data from time to time |
1,693,183,335,000 |
After running (1) sudo airmon-ng check kill and (2) airmon-ng start <Iface>, I cannot switch the Wi-Fi interface back to its original managed mode. I am currently using Kali 2022.3.
(1) kills any process/service that is currently using a wireless interface.
(2) starts monitor mode on the <Iface> interface.
|
Just restarting the network service solved the problem.
run: sudo systemctl restart NetworkManager
| How to connect to wifi after toggling the wifi card into monitor mode |
1,693,183,335,000 |
I've been working from home for quite some time now. Recently I started wondering whether my current internet connection is good enough for my activity on my laptop. For some activities I notice high internet usage, but I want clear facts, not just a gut feeling.
So I wondered if I can automatically monitor for how many seconds/minutes per day or hour I'm close to my internet connection's limit, to get a clue whether an upgrade would speed things up.
Is there a tool for that? I looked at vnstat, but it's not what I'm looking for. I would also need to look at download and upload separately.
|
Inspired by answers to this question:
you can read the amount of sent / received bytes from /sys/class/net/<interface>/statistics/tx_bytes and /sys/class/net/<interface>/statistics/rx_bytes
retrieve values every n minutes and record them in a CSV file
you may import data in your favorite spreadsheet program and compute anything you like
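The steps above can be sketched in Python; the interface name, CSV path, and sampling interval are assumptions, and the counters live under /sys/class/net on Linux:

```python
import csv
import time
from pathlib import Path

def read_bytes(iface, direction, root="/sys/class/net"):
    """Read the rx_bytes or tx_bytes counter for an interface."""
    return int(Path(root, iface, "statistics", f"{direction}_bytes").read_text())

def log_sample(writer, iface):
    """Append one timestamped rx/tx sample as a CSV row."""
    writer.writerow([int(time.time()),
                     read_bytes(iface, "rx"),
                     read_bytes(iface, "tx")])

# e.g. run every n minutes from cron:
# with open("net.csv", "a", newline="") as f:
#     log_sample(csv.writer(f), "eth0")
```

Differencing consecutive rows in the spreadsheet then gives bytes per interval, separately for download (rx) and upload (tx).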
| Monitor how long per day or hour I'm close to my maximum internet bandwidth? So if it makes sense to buy an upgrade |
1,693,183,335,000 |
I'm trying to capture writes to stdout (or stderr), but apparently the actual data is related to the exit event. I wrote a simple C program that writes to stdout and stderr.
#include <stdio.h>
int
main()
{
printf("Standard output\n");
fprintf(stderr, "Standard error\n");
return 0;
}
I compile it with gcc example.c -o example.
If I now start sysdig with
sudo sysdig --unbuffered -X syscall.type=write and proc.name=example
and run ./example, I get the following output:
91612 20:12:09.273696708 2 example (12909) > write fd=1(<f>/dev/pts/1) size=16
91613 20:12:09.273726879 2 example (12909) > write fd=2(<f>/dev/pts/1) size=15
No data is shown, and only the enter event (>) is shown. The stdout chisel doesn't produce any output either:
sudo sysdig --unbuffered -X -c stdout syscall.type=write and proc.name=example
Is the problem in my example program, or somewhere else? If I run sysdig without any parameters, it sometimes shows the exit event for some processes.
I'm running the Ubuntu 20.04 distribution, and the version of sysdig is 0.26.4. For more information on sysdig, see the sysdig GitHub repository.
|
Apparently there was some problem with the version that came with Ubuntu. I downloaded the latest version (0.27.1) from the sysdig website, and now the result is correct:
65099 19:55:54.695520567 7 example (10805) > write fd=1(<f>/dev/pts/0) size=16
65102 19:55:54.695526434 7 example (10805) < write res=16 data=
0x0000: 5374 616e 6461 7264 206f 7574 7075 740a Standard output.
65103 19:55:54.695528110 7 example (10805) > write fd=2(<f>/dev/pts/0) size=15
65104 19:55:54.695529507 7 example (10805) < write res=15 data=
0x0000: 5374 616e 6461 7264 2065 7272 6f72 0a Standard error.
| Sysdig does not show exit event for write syscall |
1,693,183,335,000 |
While installing Kali Linux there was an option to choose a primary network between Ethernet and wireless. Since my device has a problem with wireless, I opted for Ethernet. Now I have bought a new Wi-Fi adapter from TP-Link, but I still cannot use it to access the internet. It shows up when I plug it in, but there is a message like this:
Wifi Network(Ralink)
Wifi is disabled
Furthermore, I can use this adapter in monitor mode and it works fine with the wifite tool, but I cannot use it normally for internet access.
So please help me get internet access using this external Wi-Fi adapter. How do I proceed?
|
Make sure the Wi-Fi adapter is correctly installed.
Then use this command:
sudo nmtui
to activate a Wi-Fi connection and set DHCP or a static IP.
Note:
No Linux distribution has a notion of a primary or secondary network interface.
But you can point the default route (to 0.0.0.0) at either the Ethernet or the Wi-Fi connection; it is simply the gateway added to your connection.
For example:
if you set the default route or gateway on a Wi-Fi connection, all traffic passes through the Wi-Fi.
| How do I enable the Wi-Fi as a primary network tool in Kali Linux? |
1,693,183,335,000 |
Comcast is enforcing a new data cap on some customers reportedly starting as early as next month, January 2021. I'm both a data scientist and a tenant under someone else's internet plan; I know I've gone over 1 TB in a month before, so I'm trying to figure out how to monitor my data usage.
Previous similar questions on Unix SE and AskUbuntu talk about how to turn on the metered connection setting in Network Manager or just about general tools to monitor traffic in realtime.
Is there a way to record monthly network usage in Linux the way smartphones do for data plans? Ideally, it would be something per individual connection profile.
|
You are really only concerned with traffic that leaves your local network. Most OS-level network statistics are kept at the interface level, not the destination/source level, and would only reflect a single device.
I'd suggest putting a firewall between your personal segment and that of the shared router. The firewall will count the traffic for you, and would only reflect traffic that you actually use.
| Recording monthly network usage for a metered connection? |
1,516,598,047,000 |
How do I write a shell script to filter out the %MEM consumption of processes like Mozilla Firefox and LibreOffice Writer? Both values need to be summed.
|
Use the command below. Note that the pattern must not contain spaces around the |, which would otherwise become part of the match; bracketing the first letter also keeps the egrep process itself out of the result:
ps axo %mem,command | egrep '[F]irefox|[W]riter' | awk '{sum += $1} END {print sum}'
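For something easier to extend, here is a hedged Python equivalent. The patterns are examples (LibreOffice Writer's actual process is usually soffice), so match whatever ps shows on your system:

```python
import re
import subprocess

def summed_mem(patterns, ps_output=None):
    """Sum the %MEM column for commands matching any of the given regexes.
    ps_output can be injected for testing; otherwise `ps axo %mem,command`
    is run on the live system."""
    if ps_output is None:
        ps_output = subprocess.run(["ps", "axo", "%mem,command"],
                                   capture_output=True, text=True).stdout
    rx = re.compile("|".join(patterns), re.IGNORECASE)
    total = 0.0
    for line in ps_output.splitlines()[1:]:  # skip the "%MEM COMMAND" header
        mem, _, cmd = line.strip().partition(" ")
        if rx.search(cmd):
            total += float(mem)
    return total

# summed_mem(["firefox", "soffice"])
```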
| Memory resource monitoring program |
1,516,598,047,000 |
I am building a communication network using Ethernet between a mobile platform and a base/controller station. Packets shall be sent from the mobile platform to the base station only, so I need to check the connectivity between these two points, maybe by using port mirroring or ping echoing. I have been looking for the best way to create a real-time connectivity check, with detailed network analysis if possible.
Thanks for your time.
|
So basically you have answered your own question:
If you have to ensure that there's always some traffic, either base station and mobile client need to regularly ping each other, or a third party that's also connected to the network via a switch needs to ping them regularly.
If the connectivity check is done inside the base station and the mobile client, you don't need a switch. If there's a third party who does the monitoring, you need a switch. If this third party also wants to analyze network traffic, you need to mirror the network traffic to the port of the third party.
So, port-mirroring and ping-echoing are totally different things. The first applies only if there's a third party doing the monitoring, the second applies only if no constant traffic can be ensured. Nothing prevents you from using both.
Edit:
Latency: You'll get the smallest latency if you're both passively monitoring (existing traffic) and ping in small intervals. Pings may be dropped because of collisions, so you need several failed pings before you can be sure the connection is broken. But if they get dropped, there will be other traffic, so if you monitor this passively, it shouldn't be a problem.
I'm not sure what you expect with respect to security and real-time measurements. Pings and ping replies are nearly instantaneous, as is passive traffic monitoring. For security considerations, you'll need to explain the situation in more detail.
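The "several failed pings" logic above can be sketched as a small state machine. The threshold of 3 and the ping command shown in the comment are assumptions, not part of the answer:

```python
class LinkMonitor:
    """Declare the link down only after `threshold` consecutive failed
    probes, since individual pings may be dropped (e.g. collisions)."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def record(self, probe_ok):
        """Feed one probe result; returns True while the link is
        still considered up."""
        self.failures = 0 if probe_ok else self.failures + 1
        return self.failures < self.threshold

# A real probe might be:
# subprocess.call(["ping", "-c", "1", "-W", "1", host]) == 0
```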
| What are the differences between Port-mirroring and Ping-echoing? |
1,516,598,047,000 |
Hello, I have a set of Raspberry Pis installed at a client location. The RPis have internet access but no public IP. I am looking for an open-source solution for monitoring the devices: some kind of software that can be installed to send information to a cloud server about system health, uptime and such things.
It's not possible to install a VPN to get access to the Raspberrys at the client site, or any other kind of network solution. The RPis have internet access but cannot be accessed from outside.
|
Icinga2 is also able to handle this configuration. A satellite Icinga2 system can execute tests on the satellite host (the Raspberry) and send the test results to the central monitoring host. It is also able to pull its configuration from the central monitoring host.
Although I like Icinga very much, I still feel that learning Icinga2 only to monitor a single Raspberry is probably overkill.
| remote server monitoring without public ip |
1,516,598,047,000 |
Condition: controlling the 3D image output of the test code with the mouse drives all CPUs to 100%; the differential solutions below do not help; the mouse never clicks on the image.
Motivation: to understand why the mouse activates the 3D image process even when the notebook is not being evaluated, in Debian Mathematica.
Fig. 1: My desktop, where trying to overlay the figure and delete it maxes out the CPUs; doing Clear[data] beforehand does not help.
I need the notebook, but the object keeps locking it up.
Test code
(* system info for WRI *)
SystemInformation[]
(* http://mathematica.stackexchange.com/q/38305/9815 *)
data = Table[
Exp[-3 (x^2 + y^2 + z^2)], {x, -1., 1, .01}, {y, -1.,
1, .01}, {z, -1., 1, .01}];
Image3D[data, ClipRange -> {{100, 200}, {0, 200}, {100, 200}},
ColorFunction -> Automatic] (* default colour function *)
System characteristics
masi@masi:~$ glxinfo | grep OpenGL
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.5, 256 bits)
OpenGL version string: 3.0 Mesa 10.3.2
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
masi@masi:~$ echo $LD_LIBRARY_PATH
masi@masi:~$
Mathematica startup options as instructed by Wolfram, still without success
I run
masi@masi:~$ rm .Mathematica/
masi@masi:~$ su
root@masi:/home/masi# rm -r /usr/share/Mathematica/
Do clean start
masi@masi:~$ mathematica -cleanstart
Get a new university licence for Mathematica and activate
Start mathematica with the option
mathematica -mesa
I contacted Wolfram student support with a link to this thread and have been waiting for their answer for three days.
OS: Debian 8.5
Linux kernel: 4.6 of backports set up as described here
Graphics: modesetting set up as described in the thread How Smooth is Upgrading Linux kernel in Debian 8.5?
Mathematica: 11 student edition
Mathematica documentation: How can I address broken 3D graphics on Linux with certain graphics cards
Hardware: Asus Zenbook UX303UA
Differential solutions: Clear[data], Quit kernel in Menu Evaluation
|
This was a bug in Mathematica 11.0.0 on Debian.
Their testing was mostly on Linux Mint, where they did not observe the problems.
I posted a report about the case to Wolfram, who promptly reacted and fixed it.
The new release, Mathematica 11.0.1, is significantly better on Debian 8.5.
| Why Mouse Activates 3D Image in non-evaluated NB of Debian Mathematica? |
1,516,598,047,000 |
I tried many ways to launch the application i7z_GUI on Debian, but it just won't start.
I tried it both as root, as shown below, and as a normal user:
su-to-root -X -c /usr/sbin/i7z_GUI
gksu -u root /usr/sbin/i7z_GUI
./usr/sbin/i7z_GUI
After properly setting up sudo, it wouldn't work either; it just says:
Segmentation fault
|
Solved by compiling the application from its source code.
| How to launch i7z_GUI (segmentation fault)? |
1,516,598,047,000 |
As I am unable to get psutil to work on a Raspberry Pi with Python 3, I am looking for an alternative (or a proper way to install it).
What I want is to be able to see the status of the computer in terms of CPU and RAM available, so that I can sleep the script until the computer relaxes, in order to avoid the script just finishing without any apparent reason.
Any idea?
|
While it appears there's no alternative, I will follow this strategy
http://www.raspberrypi.org/forums/viewtopic.php?f=32&t=
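In case that link goes stale: the usual psutil-free strategy is to read /proc directly. A minimal sketch, assuming a kernel new enough to expose MemAvailable (3.14+) and with made-up thresholds:

```python
def meminfo_available_kb(text):
    """Parse MemAvailable (in kB) out of the contents of /proc/meminfo."""
    for line in text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1])
    raise ValueError("MemAvailable not found")

def loadavg_1min(text):
    """Parse the 1-minute load average out of /proc/loadavg contents."""
    return float(text.split()[0])

# On the Pi itself, something like:
# import time
# mem_kb = meminfo_available_kb(open("/proc/meminfo").read())
# load = loadavg_1min(open("/proc/loadavg").read())
# if load > 3.0 or mem_kb < 50_000:   # thresholds are illustrative
#     time.sleep(60)                  # let the computer relax
```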
| Alternative for psutil for ARM processors (raspberry pi) and python3 |
1,516,598,047,000 |
I've had 'accton' (in package psacct-6.3.2-63.el6_3.3.x86_64) turned on as I want to be able to report on a particular process (so I've written a script that will take the psacct file and get the information I want).
What I've not been able to determine is how a forked process's time is handled in relation to the parent and child processes.
Thus the question is:
Does the parent process accumulate the time of all its children as well as its own? Does this relationship differ between system, user and elapsed time?
In my circumstance it's fairly important to understand this behaviour between the children and parent process time.
Thanks in advance,
Sebastien
|
All the times are per-process (in older versions of Linux, they were per-thread). Metering starts when a process is forked, continues over all of its execs, and ends when it exits. Times from its children are not included; they are available in the respective records for each child when the child exits. If you're using the acct_v3 format, the records include pid and ppid, so it's theoretically possible to reconstruct process trees and compute the equivalent of getrusage(RUSAGE_CHILDREN,...).
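Assuming acct_v3-style records already parsed into dicts (the field names pid, ppid and cpu here are illustrative, not the raw struct member names), reconstructing the children's total could look like:

```python
def children_time(records, pid):
    """Sum the CPU time of all descendants of `pid`, given accounting
    records as dicts with pid, ppid and cpu (user+system) fields.
    This is the manual equivalent of getrusage(RUSAGE_CHILDREN, ...)."""
    kids = {}
    for r in records:
        kids.setdefault(r["ppid"], []).append(r)
    total = 0.0
    stack = [pid]
    while stack:  # walk the process tree iteratively
        for child in kids.get(stack.pop(), []):
            total += child["cpu"]
            stack.append(child["pid"])
    return total
```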
| CPU time that accton (psacct) records - recorded time relationship between parent and children processes |
1,516,598,047,000 |
My Linux version is Linux 3.4.76-65.111.amzn1.x86_64 x86_64 (AWS).
These days I'm suffering from the absence of monitoring tools; processes sometimes die without my knowing it.
My need is simple.
Though these might be achievable with my own shell scripts, I want them managed by a tool:
Alert if specific processes die
Alert if a resource (hard disk space, CPU, memory) reaches a threshold
Track resource usage
Free for corporate use
|
I'd use Nagios. In a survey I did a while back I noticed that this was the favorite of a large majority. Note that a lot of sites use multiple monitoring tools.
I'd like to remind you that "free" only means that the source code is available; effort is still required, and that is NOT free.
BTW, Nagios comes in a free and a paid supported version.
| Recommendations for Free Linux monitoring tools [closed] |
1,516,598,047,000 |
There are tools available, like MRTG and Cacti, with which we can monitor network traffic on a host's network interfaces. The traffic information these tools provide is the total traffic flowing through an interface, but I want to drill down further, e.g. bandwidth classification based on application-layer protocols or ports like HTTP, SMTP or 8080.
It would also be great if the tool provided bandwidth classification based on the contributors to the traffic, i.e. which IP addresses contribute to incoming and outgoing traffic.
Are there any tools, or plugins for MRTG, Cacti etc., which provide such information?
|
I would recommend ntop for this. Mike briefly mentions it in his answer, but only as a side note; I personally think ntop is the best candidate.
Ntop is similar to tools like MRTG & Cacti in that it's a long running background process that gathers information and lets you browse through it via a web browser.
It also has a command line mode that has the same feel to it as top does for processes.
It can run on an individual host, or you can place it on a gateway server and it can analyze all traffic which flows through the box.
It can show the amount of traffic flowing between 2 hosts, how much data each protocol is using, the currently active sessions, and a lot more.
This screenshot is a bit dated, as they seem to have released a new version with a redesigned UI, but I couldn't find any screenshots of it, and I don't presently have it installed.
| Bandwidth Management tool |
1,516,598,047,000 |
I want to monitor a folder and, when a .cpp file arrives in it, automatically compile the program on Ubuntu.
Can anyone help me solve this problem, please?
|
For such a simple case real monitoring (inotify) is probably overkill.
#! /bin/bash

file_path='/path/to/file.cpp'

while true; do
    if [ -f "$file_path" ]; then
        : do something
        exit 0
    else
        sleep 1
    fi
done
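If you need to catch any .cpp file dropped into the folder rather than one fixed path, the same polling idea can be sketched in Python; the g++ invocation and the one-second poll interval are assumptions:

```python
import subprocess
import time
from pathlib import Path

def new_cpp_files(folder, seen):
    """Return .cpp files in `folder` that have not been seen before,
    updating `seen` in place."""
    current = set(Path(folder).glob("*.cpp"))
    fresh = current - seen
    seen |= current
    return sorted(fresh)

def watch_and_compile(folder, poll=1.0):
    """Poll `folder` forever, compiling each new .cpp file as it arrives."""
    seen = set()
    while True:
        for src in new_cpp_files(folder, seen):
            subprocess.run(["g++", str(src), "-o", str(src.with_suffix(""))])
        time.sleep(poll)
```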
| Monitoring folder and compile [closed] |
1,516,598,047,000 |
Is there a program to monitor all resource utilization at once for a personal computer: CPU, memory, hard drives?
|
Take a look at Conky. Also, searching for CPU widgets for Arch Linux will give you a lot of results for what you want.
Additionally, if you want to know all that information via CLI, you can have it quickly with:
echo CPU: && mpstat && echo && echo MEMORY: && free && echo && echo "DISK USAGE:" && df -h
| Monitor all resource use? [closed] |
1,516,598,047,000 |
I have a Sensu SERVER setup and a Sensu CLIENT.
Services sensu-server, sensu-client, uchiwa, sensu-api are running on SERVER.
Services sensu-client is running on CLIENT.
All the checks I defined in /etc/sensu/conf.d on SERVER are listed in uchiwa.
Unfortunately, I can't see any client listed, including the sensu-client running on SERVER.
in SERVER:
$ cat client.json
{
"client": {
"name": "server",
"address": "10.41.10.1",
"subscriptions": ["ALL" ]
}
}
in CLIENT:
$cat client.json
{
"client": {
"name": "compute1",
"address": "10.41.10.10",
"subscriptions": [ "system","cmpt" ]
}
}
How can I debug this issue? I can't see any errors, and I don't know whether the problem is with uchiwa or sensu. Has anyone resolved a similar issue?
|
I solved this issue. The RabbitMQ credentials in /etc/sensu/conf.d/rabbitmq.json were not correct. I created a new user by going to SERVER:4567 (RabbitMQ GUI) and adding those credentials to the json file.
| Unable to see client on Uchiwa dashboard (Sensu Monitoring) |
1,516,598,047,000 |
I have an Ubuntu 16.04 xenial Nginx server environment with postfix and a few webapps under /var/www/html.
How can I notify myself, by an email sent to my personal Gmail account, if my site is down?
The desired state is that if the webpages, or at least the homepage, don't return HTTP status 200 (OK), I'll get a daily email for each day the problem wasn't taken care of.
For example, each day I'll get:
Hello, your site domain.tld is down. Please fix it.
|
As mentioned in the comments, there are literally dozens of ways you can do this.
The most basic approach would be to call wget or curl from a daily cron job and check the exit code (if they can't download the page, they return a non-zero exit code), then use that to trigger an e-mail. This approach has a few issues, though: for example, both wget and curl follow redirects, so it will also succeed for pretty much any 3xx code provided that it points to an accessible page.
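A sketch of such a check in Python, which sidesteps the redirect caveat by refusing to follow 3xx responses; the URL and the mail-sending step are left as assumptions:

```python
import urllib.error
import urllib.request

def check_site(url, timeout=10):
    """Return the HTTP status code, or None if the site is unreachable.
    Unlike plain wget/curl, redirects are NOT followed, so only a real
    200 counts as 'up'."""
    class NoRedirect(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, *args, **kwargs):
            return None  # treat any 3xx as a failure instead of following it
    opener = urllib.request.build_opener(NoRedirect)
    try:
        return opener.open(url, timeout=timeout).status
    except urllib.error.HTTPError as e:
        return e.code          # 3xx/4xx/5xx: site responded but not OK
    except urllib.error.URLError:
        return None            # DNS failure, refused connection, timeout

# in the daily cron job:
# if check_site("https://domain.tld/") != 200:
#     send_mail(...)   # hypothetical mail step, e.g. via local postfix
```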
A step up from that is a tool like monit, which has the added bonus that you can have it watch your web-server process and let you know if that stops running, and do all kinds of other useful checks (including allowing for mostly arbitrary scripted network service checks, checking network interface status, etc). This is probably the simplest option on most single servers.
If you've got a bunch of servers, you might look instead at something like nagios, which is designed for handling network-wide sanity checks.
Keep in mind also however that pretty much regardless of which option you go with, you're probably going to need to run a local mail server to forward messages to your gmail account (though this is really easy to do provided you're not using a hosting service that blocks outbound SMTP connections).
| Notifying myself that my webapp/website is down (Nginx environment) [closed] |
1,377,294,573,000 |
Linux as router: I have 3 Internet providers, each with its own modem.
Provider1, which is gateway address 192.168.1.1
Connected to linux router eth1/192.168.1.2
Provider2, gateway address 192.168.2.1
Connected to linux router eth2/192.168.2.2
Provider3, gateway address 192.168.3.1
Connected to linux router eth3/192.168.3.2
________
+------------+ /
| | |
+----------------------+ Provider 1 +--------|
__ |192.168.1.2 |192.168.1.1 | /
___/ \_ +------+-------+ +------------+ |
_/ \__ | eth1 | +------------+ /
/ \ eth0| |192.168.2.2 | | |
|Client network -----+ ROUTER eth2|--------------+ Provider 2 +------| Internet
\10.0.0.0/24 __/ | | |192.168.2.1 | |
\__ __/ | eth3 | +------------+ \
\___/ +------+-------+ +------------+ |
|192.168.3.2 | | \
+----------------------+ Provider 3 +-------|
|192.168.3.1 | |
+------------+ \________
I would like to route the clients in network 10.0.0.0/24 by source IP to different gateways.
The interface to the client network is eth0/10.0.0.1, which is the default gateway for all clients.
For example:
10.0.0.11 should be routed to Provider1 @ eth1
10.0.0.12 should be routed to Provider2 @ eth2
...and so on...
I think I need to use ip route and iptables for SNAT, but I have not figured out exactly how.
Here is the script I have so far.
ipv4 forwarding is enabled.
#!/bin/bash
# flush tables
ip route flush table connection1
ip route flush table connection2
ip route flush table connection3
# add the default gateways for each table
ip route add table connection1 default via 192.168.1.1
ip route add table connection2 default via 192.168.2.1
ip route add table connection3 default via 192.168.3.1
# add some IP addresses for marking
iptables -t mangle -A PREROUTING -s 10.0.0.11 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -s 10.0.0.12 -j MARK --set-mark 2
iptables -t mangle -A PREROUTING -s 10.0.0.13 -j MARK --set-mark 3
# add the source nat rules for each outgoing interface
iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 192.168.1.2
iptables -t nat -A POSTROUTING -o eth2 -j SNAT --to-source 192.168.2.2
iptables -t nat -A POSTROUTING -o eth3 -j SNAT --to-source 192.168.3.2
# link routing tables to connections (?)
ip rule add fwmark 1 table connection1
ip rule add fwmark 2 table connection2
ip rule add fwmark 3 table connection3
#default route for anything not configured above should be eth2
|
Here is a similar setup from one of our routers (with some irrelevant stuff snipped). Note that this handles incoming connections as well.
Note the use of variables instead of hard-coded mark numbers. So much easier to maintain! They're stored in a separate script, and sourced in. Table names are configured in /etc/iproute2/rt_tables. Interface names are set in /etc/udev/rules.d/70-persistent-net.rules.
##### fwmark ######
iptables -t mangle -F
iptables -t mangle -X
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
iptables -t mangle -A PREROUTING -m mark ! --mark 0 -j RETURN # if already set, we're done
iptables -t mangle -A PREROUTING -i wan -j MARK --set-mark $MARK_CAVTEL
iptables -t mangle -A PREROUTING -i comcast -j MARK --set-mark $MARK_COMCAST
iptables -t mangle -A PREROUTING -i vz-dsl -j MARK --set-mark $MARK_VZDSL
iptables -t mangle -A POSTROUTING -o wan -j MARK --set-mark $MARK_CAVTEL
iptables -t mangle -A POSTROUTING -o comcast -j MARK --set-mark $MARK_COMCAST
iptables -t mangle -A POSTROUTING -o vz-dsl -j MARK --set-mark $MARK_VZDSL
iptables -t mangle -A POSTROUTING -j CONNMARK --save-mark
##### NAT ######
iptables -t nat -F
iptables -t nat -X
for local in «list of internal IP/netmask combos»; do
iptables -t nat -A POSTROUTING -s $local -o wan -j SNAT --to-source «IP»
iptables -t nat -A POSTROUTING -s $local -o comcast -j SNAT --to-source «IP»
iptables -t nat -A POSTROUTING -s $local -o vz-dsl -j SNAT --to-source «IP»
done
# this is an example of what the incoming traffic rules look like
for extip in «list of external IPs»; do
iptables -t nat -A PREROUTING -p tcp -d $extip --dport «port» -j DNAT --to-destination «internal-IP»:443
done
And the rules:
ip rule flush
ip rule add from all pref 1000 lookup main
ip rule add from A.B.C.D/29 pref 1500 lookup comcast # these IPs are the external ranges (we have multiple IPs on each connection)
ip rule add from E.F.G.H/29 pref 1501 lookup cavtel
ip rule add from I.J.K.L/31 pref 1502 lookup vzdsl
ip rule add from M.N.O.P/31 pref 1502 lookup vzdsl # yes, you can have multiple ranges
ip rule add fwmark $MARK_COMCAST pref 2000 lookup comcast
ip rule add fwmark $MARK_CAVTEL pref 2001 lookup cavtel
ip rule add fwmark $MARK_VZDSL pref 2002 lookup vzdsl
ip rule add pref 2500 lookup comcast # the pref order here determines the default—we default to Comcast.
ip rule add pref 2501 lookup cavtel
ip rule add pref 2502 lookup vzdsl
ip rule add pref 32767 lookup default
The routing tables get set up in /etc/network/interfaces, so that taking down an interface makes it switch to using a different one:
iface comcast inet static
address A.B.C.Q
netmask 255.255.255.248
up ip route add table comcast default via A.B.C.R dev comcast
down ip route flush table comcast
Note: If you're doing filtering as well (which you probably are) you'll also need to add the appropriate rules to FORWARD to ACCEPT the traffic. Especially for any incoming traffic.
| Linux as router with multiple internet providers |
1,377,294,573,000 |
I use the auto-generated rules that come from OpenWRT as an example of NAT reflection (NAT loopback).
So let's pretend there's a network 192.168.1.0/24 with two hosts (+ router): 192.168.1.100 and 192.168.1.200. The router has two interfaces LAN (br-lan) and WAN (eth0). The LAN interface has an IP 192.168.1.1 and the WAN interface has an IP 82.120.11.22 (public). There's a www server on 192.168.1.200. We want to connect from 192.168.1.100 to the web server using the public IP address.
If you wanted to redirect WAN->LAN so people from the internet can visit the web server, you would add the following rules to iptables:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.200:80
I know what the rules mean. But there are also two other rules, which are responsible for NAT reflection, and one of them isn't as clear to me as the ones above. The first rule looks like this:
iptables -t nat -A PREROUTING -i br-lan -s 192.168.1.0/24 -d 82.120.11.22/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.200
And this means that all traffic from the 192.168.1.0/24 network destined for the public IP on port 80 should be sent to the local web server; in other words, I type the public IP into Firefox and I should get the page returned by the server, right? All the other forwarding magic in the filter table was already done, but I still can't connect to the web server using the public IP. The packet hits the rule, but nothing happens.
We need another nat rule to make the whole mechanism work:
iptables -t nat -A POSTROUTING -o br-lan -s 192.168.1.0/24 -d 192.168.1.200/32 -p tcp -m tcp --dport 80 -j SNAT --to-source 192.168.1.1
I don't know why the rule is needed. Can anyone explain what exactly the rule does?
|
For a NAT to work properly both the packets from client to server and the packets from server to client must pass through the NAT.
Note that the NAT table in iptables is only used for the first packet of a connection. Later packets related to the connection are processed using the internal mapping tables established when the first packet was translated.
iptables -t nat -A PREROUTING -i br-lan -s 192.168.1.0/24 -d 82.120.11.22/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.200
With just this rule in place the following happens.
The client creates the initial packet (tcp syn) and addresses it to the public IP. The client expects to get a response to this packet with the source ip/port and destination ip/port swapped.
Since the client has no specific entries in its routing table it sends it to its default gateway. The default gateway is the NAT box.
The NAT box receives the initial packet, modifies the destination IP, establishes a mapping table entry, looks up the new destination in its routing table and sends the packet to the server. The source address remains unchanged.
The Server receives the initial packet and crafts a response (syn-ack). In the response the source IP/port is swapped with the destination IP/port. Since the source IP of the incoming packet was unchanged the destination IP of the reply is the IP of the client.
The Server looks up the IP in its routing table and sends the packet back to the client.
The client rejects the packet because the source address doesn't match what it expects.
iptables -t nat -A POSTROUTING -o br-lan -s 192.168.1.0/24 -d 192.168.1.200/32 -p tcp -m tcp --dport 80 -j SNAT --to-source 192.168.1.1
Once we add this rule the sequence of events changes.
The client creates the initial packet (tcp syn) and addresses it to the public IP. The client expects to get a response to this packet with the source ip/port and destination ip/port swapped.
Since the client has no specific entries in its routing tables it sends it to its default gateway. The default gateway is the NAT box.
The NAT box receives the initial packet and, following the entries in the NAT table, modifies the destination IP, the source IP and possibly the source port (the source port is only modified if needed to disambiguate), establishes a mapping table entry, looks up the new destination in its routing table and sends the packet to the server.
The Server receives the initial packet and crafts a response (syn-ack). In the response the source IP/port is swapped with the destination IP/port. Since the source IP of the incoming packet was modified by the NAT box the destination IP of the packet is the IP of the NAT box.
The Server looks up the IP in its routing table and sends the packet back to the NAT box.
The NAT box looks up the packet's details (source IP, source port, destination IP, destination port) in its NAT mapping tables and performs a reverse translation. This changes the source IP to the public IP, the source port to 80, the destination IP to the client's IP and the destination port back to whatever source port the client used.
The NAT box looks up the new destination IP in its routing table and sends the packet back to the client.
The client accepts the packet.
Communication continues with the NAT translating packets back and forth.
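A toy model of the hairpin case may help; the addresses are taken from the question, and this only models whom the server addresses its reply to:

```python
def reply_path(dnat_only):
    """Model the hairpin case: client 192.168.1.100 -> public 82.120.11.22:80.
    Returns the (src, dst) the server puts on its reply packet."""
    client, server = "192.168.1.100", "192.168.1.200"
    router_lan = "192.168.1.1"
    # DNAT rewrites the destination; the extra SNAT rule also rewrites
    # the source, so the server sees different source addresses:
    src_seen_by_server = client if dnat_only else router_lan
    # the server simply swaps source and destination when replying
    return (server, src_seen_by_server)

# With DNAT only, the reply goes straight to the client on the LAN,
# bypassing the router; the client sees an unexpected source and drops it.
assert reply_path(dnat_only=True) == ("192.168.1.200", "192.168.1.100")
# With the SNAT rule, the reply is addressed to the router, which can
# then reverse both translations before forwarding it to the client.
assert reply_path(dnat_only=False) == ("192.168.1.200", "192.168.1.1")
```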
| How does NAT reflection (NAT loopback) work? |
1,377,294,573,000 |
I'd like to redirect local requests to a port that is translated with NAT. I have the following rule:
iptables -t nat -A PREROUTING -p tcp --dport 9020 -j DNAT --to 10.0.3.11:80
however, requests coming from localhost are rejected:
wget http://127.0.0.1:9020
Connecting to 127.0.0.1:9020... failed: Connection refused.
When I connect from any other computer it works. Is there a way to do this without recompiling the kernel with CONFIG_IP_NF_NAT_LOCAL=y? https://wiki.debian.org/Firewalls-local-port-redirection (which seems to be obsolete).
Update:
iptables -L -v -n --line-numbers -t nat:
Chain PREROUTING (policy ACCEPT 26 packets, 3230 bytes)
num pkts bytes target prot opt in out source destination
4 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:9020 to:10.0.3.11:80
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 0 0 MASQUERADE all -- * * 10.0.0.0/16 0.0.0.0/0
|
Based on @Hauke Laging comments I put together this:
# connections from outside
iptables -t nat -A PREROUTING -p tcp --dport 9020 -j DNAT --to 10.0.3.11:80
# for local connection
iptables -t nat -A OUTPUT -p tcp --dport 9020 -j DNAT --to 10.0.3.11:80
# Masquerade local subnet
iptables -t nat -A POSTROUTING -s 10.0.3.0/16 -j MASQUERADE
iptables -A FORWARD -o lxcbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i lxcbr0 -o eth0 -j ACCEPT
iptables -A FORWARD -i lxcbr0 -o lo -j ACCEPT
where lxcbr0 is interface in 10.0.3.0/16 subnet and eth0 is interface with public IP addrees.
| iptables: redirect local request with NAT |
1,377,294,573,000 |
I made two experiments. This is the network for both of them:
[private network] [public network]
A -------------------- R ----------------- B
192.168.0.5 192.168.0.1|192.0.2.1 192.0.2.8
A's default gateway is R. R has IPv4 forwarding active and the following iptables rule:
iptables -t nat -A POSTROUTING -p TCP -j MASQUERADE --to-ports 50000
The intent is, anything TCP from A will be masked as 192.0.2.1 using R's port 50000.
I published a TCP service on port 60000 on B using nc -4l 192.0.2.8 60000.
Then I opened a connection from A: nc -4 192.0.2.8 60000
A started sending packets that looked like this:
192.168.0.5:53269 -> 192.0.2.8:60000
R translated that into
192.0.2.1:50000 -> 192.0.2.8:60000
So far, so good.
I then tried to open the following client on R: nc -4 192.0.2.8 60000 -p 50000. I sent messages, nothing happens. No packets can be seen on R's tcpdump.
Because the masquerade rule exists, or at least because it's active, I would have expected R's nc to fail with the error message "nc: Address already in use", which is what happens if I bind two ncs to the same port.
I then waited a while so conntrack's mapping would die.
The second experiment consisted of me trying to open R's client first. R starts talking to B just fine. If I then open the connection from A, its packets are ignored. A's SYNs arrive at R, but they aren't answered, not even by ICMP errors. I don't know if this is because R knows it ran out of masquerading ports or because Linux is just flat-out confused (it technically masks the port but the already established connection somehow interferes).
I feel the NAT's behaviour is wrong. I could accidentally configure a port for both masquerading (particularly, by not specifying --to-ports during the iptables rule) and a service, and the kernel will drop connections silently. I also don't see any of this documented anywhere.
For example:
A makes a normal request to B. R masks using port 50k.
A makes a DNS query to R. Since R is recursive, it (using, out of sheer coincidence, ephemeral port 50k) queries authoritative nameserver Z on port 53.
A collision just happened; R is now using port 50k for two separate TCP connections.
I guess it's because you don't normally publish services on routers. But then again, would it hurt the kernel to "borrow" the port from the TCP port pool when it becomes actively masqueraded?
I know that I can separate my ephemeral ports from my --to-ports. However, this doesn't seem to be the default behaviour. Both NAT and the ephemeral ports default to 32768-61000, which is creepy.
(I found the ephemeral range by querying /proc/sys/net/ipv4/ip_local_port_range, and the NAT range by simply NATting lots of UDP requests in a separate experiment - and printing the source port at the server side. I couldn't find a way to print the range using iptables.)
|
would it hurt the kernel to "borrow" the port from the TCP port pool when it becomes actively masqueraded?
I guess the answer is "no, but it doesn't matter much."
I incorrectly assumed R only used the destination transport address of the response packet to tell whether it was headed towards A or itself. It actually seems to use the entire source-destination transport addresses tuple to identify a connection. Therefore, it's actually normal for NAT to create multiple connections using the same (R owned) port; it doesn't create any confusion. Consequently, the TCP/UDP port pools don't matter.
It's pretty obvious now that I think about it.
I then tried to open the following client on R: nc -4 192.0.2.8 60000 -p 50000. I sent messages, nothing happens. No packets can be seen on R's tcpdump.
This is the part of the experiments where I messed up.
The failure happens because both the source and destination transport addresses are the same, not just because the source address is the same.
If I do, say, nc -4 192.0.2.8 60001 -p 50000, it actually works. Even if it's using the same port as a NAT mask.
I feel the NAT's behaviour is wrong. I could accidentally configure a port for both masquerading (particularly, by not specifying --to-ports during the iptables rule) and a service, and the kernel will drop connections silently.
It won't, because the masked connections and the R-started connections will most likely have different destinations.
Because the masquerade rule exists, or at least because it's active, I would have expected R's nc to fail with the error message "nc: Address already in use", which is what happens if I bind two ncs to the same port.
I'm still looking for a bulletproof answer to this, but everything seems to point to "it's an adverse consequence of how it's implemented, and it's so small we're willing to live with it."
| Why doesn't NAT reserve ports from the machine's TCP and UDP port pool? |
1,377,294,573,000 |
I'm trying to figure out how NAT and iptables work. While I'm in the trial-and-error phase of learning about it, I found two somewhat conflicting howtos.
One howto uses a script to call iptables rules one after another. The script seems to be named and stored such that it is executed early during system boot, and I think a problem may be that other scripts may be called after it and undo its intentions. I even think I did this once by accident when I saved and renamed the original script (00-firewall) using a backup (00-firewall-old). The example script from the howto is:
#!/bin/sh
PATH=/usr/sbin:/sbin:/bin:/usr/bin
#
# delete all existing rules.
#
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -X
# Always accept loopback traffic
iptables -A INPUT -i lo -j ACCEPT
# Allow established connections, and those not coming from the outside
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state NEW -i ! eth1 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow outgoing connections from the LAN side.
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
# Masquerade.
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
# Don't forward from the outside to the inside.
iptables -A FORWARD -i eth1 -o eth1 -j REJECT
# Enable routing.
echo 1 > /proc/sys/net/ipv4/ip_forward
Another howto does not use a script but a file where some filter rules are defined. It looks like this:
*filter
# Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT
# Accepts all established inbound connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allows all outbound traffic
# You could modify this to only allow certain traffic
-A OUTPUT -j ACCEPT
# Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
# Allows SSH connections
# THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
-A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT
# Now you should read up on iptables rules and consider whether ssh access
# for everyone is really desired. Most likely you will only allow access from certain IPs.
# Allow ping
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
# log iptables denied calls (access via 'dmesg' command)
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
# Reject all other inbound - default deny unless explicitly allowed policy:
-A INPUT -j REJECT
-A FORWARD -j REJECT
COMMIT
What are the pros and cons of both ways of setting up iptables? Background info is much appreciated because I'm quite new to the whole thing. For example, I don't get who is reading the file from the latter howto, and how it is processed. My feeling tells me the second howto suggests a better solution, but why exactly?
|
I've used both techniques in the past. These days, I'm gravitating towards a hybrid of the two.
If your ruleset has five or six simple rules, either method is fine. Things start to become interesting when you have big rulesets: large installations, your firewalling box does a bit of routing, etc.
Just remember, you can shoot yourself in the foot no matter how you load your rulesets. :)
Script-Based
You make a bash script, a Perl script, a Python script — hell, write a Lisp or Befunge program for all anyone cares! In the script, you create all the Netfilter rules you want.
The upsides:
Directness. You're experimenting with rules, and you just copy and paste the ones that work from the command line straight to the script.
Dynamic firewalling. One client of mine runs OpenVPN setups for their own clients, and each client gets their own instance of OpenVPN for security, firewalling and accounting reasons. The first-line-of-defence firewall needs to open the OpenVPN ports dynamically for each (IP,port) tuple. So the firewalling script parses the manifest of OpenVPN configs and dynamically pokes the required holes. Another one stores web server details on LDAP; the iptables script queries the LDAP server, and automatically allows ingress to the web servers. On large installations, this is a great boon.
Cleverness. My firewall scripts allow remote administration, even without lights out management: after the ruleset is loaded, if the operator doesn't respond within a couple of minutes, the ruleset is rolled back to the last one known to work. If for some reason that fails too, there's a third (and fourth) failback of decreasing security.
More cleverness: you can open up SSH access to your netblock at the beginning of the script, then rescind it at the end of the script (and let filtered SSH sessions in). So, if your script fails, you can still get in there.
Online examples. For some reason, most of the examples I've seen online used invocations of iptables (this may be influenced by the fact my first few firewall setups predated iptables-save, and also Netfilter — but that's another story)
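The "dynamic firewalling" upside above is essentially a loop over some source of truth. A minimal sketch — the manifest path and format are invented for illustration (one "ip port" pair per OpenVPN instance):

```shell
#!/bin/sh
# hypothetical manifest: one "ip port" pair per line
while read -r ip port; do
    iptables -A INPUT -p udp -d "$ip" --dport "$port" -j ACCEPT
done < /etc/openvpn/instances.manifest
```

The same pattern works for the LDAP example: replace the `read` loop with whatever query spits out address/port pairs.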
The downsides:
One syntax error in the script and you're locked out of your firewall. Some of the cleverness above is down to painful experiences. :)
Speed. On embedded Linux boxen, a bash (or even dash) script will be a slow, slow thing to run. Slow enough that, depending on your security policy, you may need to consider the order of rule addition — you could have a short-lived hole in your defences, and that's enough. Ruleset loading is nowhere near atomic.
Complexity. Yes, you can do amazing things, but yes, you can also make the script too complex to understand or maintain.
Ruleset-Based
Add your rules to a ruleset file, then use iptables-restore to load it (or just save your existing ruleset using iptables-save). This is what Debian does by default.
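Loading and persisting a file in that format is one command each (the path below follows the Debian/Ubuntu iptables-persistent convention; adjust for your distro):

```shell
iptables-restore < /etc/iptables/rules.v4   # load the whole ruleset near-atomically
iptables-save > /etc/iptables/rules.v4      # dump the currently loaded rules
```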
The pros:
Speed. iptables-restore is a C program, and it's deliciously fast compared to shell scripts. The difference is obvious even on decent machines, but it's orders of magnitude faster on more modest hardware.
Regularity. The format is easier to understand, it's essentially self-documenting (once you get used to Netfilter's peculiarities).
It's the Standard Tool, if you care about that.
It saves all the Netfilter tables. Too many Netfilter tools (including iptables) only operate on the filter table, and you could forget you have other ones at your disposal (with possibly harmful rules in them). This way, you get to see all the tables.
The cons:
Lack of flexibility.
With a lack of templating/parametrisation/dynamic features, repetition can lead to less maintainable rulesets, and to huge ruleset bugs. You don't want those.
A Hybrid Solution — Best of Both Worlds
I've been developing this one for a while in my Copious Free Time. I'm planning on using the same script-based setup I have now, but once the ruleset is loaded, it saves it with iptables-save and then caches it for later. You can have a dynamic ruleset with all its benefits, but it can be loaded really quickly when, e.g., the firewall box reboots.
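A sketch of that hybrid — the paths and the staleness check are illustrative only, not a drop-in solution:

```shell
#!/bin/sh
CACHE=/var/cache/firewall/ruleset.v4
GENERATOR=/etc/firewall/generate-rules.sh

if [ -s "$CACHE" ] && [ "$CACHE" -nt "$GENERATOR" ]; then
    # fast path (e.g. on reboot): restore the cached ruleset in one go
    iptables-restore < "$CACHE"
else
    # slow path: run the dynamic script, then cache the result for next time
    "$GENERATOR"
    iptables-save > "$CACHE"
fi
```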
| iptables: The "script" way or the "*filter, rules, COMMIT" way? |
1,377,294,573,000 |
In GlusterFS, lets say i have 2 Nodes (Servers) on a Volume. Lets say the volume info is something like this:
Volume Name: volume-www
Brick1: gluster-server-01:/volume-www/brick
Brick2: gluster-server-02:/volume-www/brick
From the client, as we know, we have to mount the volume volume-www from one server, like:
mount -t glusterfs gluster-server-01:/volume-www /var/www
I still feel there's a choke point, since I am connecting to that gluster-server-01 only.
What if it fails?
Of course I can manually mount from another healthy server again. But is there a smarter way (an industry-standard approach) to solve this?
|
When you are doing this:
mount -t glusterfs gluster-server-01:/volume-www /var/www
You are initially connecting to one of the nodes that make up the Gluster volume, but the Gluster Native Client (which is FUSE-based) receives information about the other nodes from gluster-server-01. Since the client now knows about the other nodes it can gracefully handle a failover scenario.
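To also survive the corner case where gluster-server-01 happens to be down at mount time, the FUSE client accepts a list of fallback volfile servers — the option is spelled backup-volfile-servers in recent glusterfs-fuse releases (backupvolfile-server in older ones):

```shell
mount -t glusterfs -o backup-volfile-servers=gluster-server-02 \
    gluster-server-01:/volume-www /var/www
```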
| GlusterFS how to failover (smartly) if a mounted Server is failed? |
1,377,294,573,000 |
So, not concerning ourselves with the WHY, and more so with the HOW, I'd like to see if anyone knows where I'm going wrong here.
Basically, I'd like all packets headed for port 80 on an IP that I've aliased to the loopback device (169.254.169.254) to be forwarded to port 8080 on another IP, which happens to be the public IP of the same box (we'll use 1.1.1.1 for the purpose of this question). In doing so, I should [ostensibly] be able to run
telnet 169.254.169.254 80
and reach 1.1.1.1:8080, however, this is not happening.
Here is my nat table in iptables:
~# iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 66 packets, 3857 bytes)
pkts bytes target prot opt in out source destination
0 0 DNAT tcp -- * * 0.0.0.0/0 169.254.169.254 tcp dpt:80 to:1.1.1.1:8080
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Am I missing something? I've followed most the information in the iptables man pages and also in the links below, however, am still getting a "connection refused" during my telnet attempts. I have tried adding ~#iptables -t nat -A POSTROUTING -j MASQUERADE to my iptables, but to no avail :/
If anyone could point me in the right direction that would be phenomenal!
http://linux-ip.net/html/nat-dnat.html
https://www.frozentux.net/iptables-tutorial/chunkyhtml/x4033.html
EDIT I wanted to add that I do indeed have the following sysctl parameter enabled ~# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
EDIT No. 2 I was able to solve this by adding the rule to the OUTPUT chain in the nat table, vs. the PREROUTING chain as I originally tried.
|
I actually got this working by ensuring the kernel module "br_netfilter" was loaded on the host machine. It was as simple as that, annoyingly simple after being stumped for so long.
Documentation/articles that lead me to the solution:
1)
https://github.com/omribahumi/libvirt_metadata_api and https://thewiringcloset.wordpress.com/2013/03/27/linux-iptable-snat-dnat/
See "Setting up iptables" section – I'm using 'DNAT' instead of 'REDIRECT' bc 'REDIRECT' simply redirects traffic to an interface on the local system, rather than forwarding to a removed address as in the case of destination NAT.
2)
https://serverfault.com/questions/179200/difference-beetween-dnat-and-redirect-in-iptables and https://www.netfilter.org/documentation/HOWTO/NAT-HOWTO-6.html
Helped me understand the diff between REDIRECT & DNAT, as stated above.
3)
https://github.com/omribahumi/libvirt_metadata_api/pull/4/files
This commit led me to the actual solution (the above things I'd already been doing before, but to no avail).
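The asker's Edit No. 2 points at the other key detail: the nat PREROUTING chain never sees locally generated packets; those traverse the nat OUTPUT chain instead. The rule implied by that edit would look roughly like this (untested sketch, using the addresses from the question):

```shell
iptables -t nat -A OUTPUT -p tcp -d 169.254.169.254 --dport 80 \
    -j DNAT --to-destination 1.1.1.1:8080
```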
| Unable to get NAT working via iptables PREROUTING chain |
1,377,294,573,000 |
I have an OpenWRT gateway (self-built 19.07, kernel 4.14.156) that sits on a public IP address in front of my private network. I am using nftables (not iptables).
I would like to expose a non-standard port on the public address, and forward it to a standard port on a machine behind the gateway. I think this used to be called port forwarding: it would look like your gateway machine was providing, say, http service, but it was really a machine behind the gateway on a private address.
Here is my nftables configuration. For these purposes, my "standard service" is on port 1234, and I want to allow the public to access it at gateway:4321.
#!/usr/sbin/nft -ef
#
# nftables configuration for my gateway
#
flush ruleset
table raw {
chain prerouting {
type filter hook prerouting priority -300;
tcp dport 4321 tcp dport set 1234 log prefix "raw " notrack;
}
}
table ip filter {
chain output {
type filter hook output priority 100; policy accept;
tcp dport { 1234, 4321 } log prefix "output ";
}
chain input {
type filter hook input priority 0; policy accept;
tcp dport { 1234, 4321 } log prefix "input " accept;
}
chain forward {
type filter hook forward priority 0; policy accept;
tcp dport { 1234, 4321 } log prefix "forward " accept;
}
}
table ip nat {
chain prerouting {
type nat hook prerouting priority 0; policy accept;
tcp dport { 1234, 4321 } log prefix "nat-pre " dnat 172.23.32.200;
}
chain postrouting {
type nat hook postrouting priority 100; policy accept;
tcp dport { 1234, 4321 } log prefix "nat-post ";
oifname "eth0" masquerade;
}
}
Using this setup, external machines can access the private machine at gateway:1234. Logging shows the nat-pre SYN packet from external to gateway IP, then forward from external to internal IP, then nat-post from external to internal, and the existing connection takes care of the rest of the packets.
External machines connecting to gateway:4321 log as raw, where the 4321 gets changed to 1234. Then the SYN packet gets forwarded to the internal server, the reply SYN packet comes back, and ... nothing!
The problem, I think, is that I'm not doing the nftables configuration that would change the internal:1234 back to gateway:4321, which the remote machine is expecting. Even if masquerade changes internal:1234 to gateway:1234, the remote machine is not expecting that, and will probably dump it.
Any ideas for this configuration?
|
You are not translating the port number. When the external connection is to port 1234, this is not a problem. But when it is to 4321, the dnat passes through to port 4321 on the internal server, not port 1234. Try
tcp dport { 1234, 4321 } log prefix "nat-pre " dnat 172.23.32.200:1234;
You do not need to translate the reply packets coming back from your internal server. This is done automagically using the entry in the connection tracking table that is created on the first syn packet.
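Since NAT depends on connection tracking, the notrack rule in the question's raw table is probably best dropped entirely, at which point the whole configuration can collapse into the nat table. A minimal sketch with the question's addresses:

```
table ip nat {
    chain prerouting {
        type nat hook prerouting priority 0; policy accept;
        tcp dport { 1234, 4321 } dnat 172.23.32.200:1234
    }
    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        oifname "eth0" masquerade
    }
}
```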
| Port forwarding & NAT with nftables |
1,377,294,573,000 |
We have, say, 4 users in a private network connected to the Internet through a Linux router with a public IP address that is doing network address translation (NAT). I have to configure QoS to give the users access to the Internet, but with throttled bandwidth for 2 users while the others have no limitations.
eth0:121.51.26.35
eth1:10.239.107.1
eth0 of Linux Router is a 10Mbps link. eth1 is connected to switch and 4 nodes are connected to the switch.
I want to configure tc to throttle the bandwidth of 2 nodes only, i.e. a group of users (XyZ in the picture), to use only 3Mbps cumulatively. (When 1 user is downloading/uploading, they must get 3Mbps, but when 3 users are downloading/uploading simultaneously they must each receive 1Mbps.)
First, please let me know whether the requirement is achievable,
and if yes, how shall I proceed?
Below is the topology
|
You need to pick a class aware qdisc like HFSC or HTB.
Then you'll have to build a class tree like this:
Root Class (10MBit)
|
\--- XyZ Class (rate 3Mbit ceil 3Mbit)
| |
| \--- Client 10 (rate 1.5Mbit ceil 3Mbit)
| \--- Client 11 (rate 1.5Mbit ceil 3Mbit)
|
\--- Client 30 (rate 3.5Mbit ceil 10Mbit)
\--- Client 40 (rate 3.5Mbit ceil 10Mbit)
And that on both interfaces (for upload and download shaping).
With HTB, to get predictable results you should make sure that the sum of the children's rates always equals the parent's. So Root has 10Mbit, and its direct children add up to it (XyZ 3Mbit + Client30 3.5Mbit + Client40 3.5Mbit == 10Mbit). Likewise XyZ has 3Mbit shared by its children Client10+Client11.
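In tc terms the class tree could look roughly like this, shaping the download direction on eth1 (the client IPs and class IDs are invented for illustration; unmatched traffic falls into 1:30 via `default`). Upload shaping on eth0 is analogous, but complicated by the fact that masquerading rewrites source addresses before egress queueing, which is usually worked around with fwmarks:

```shell
tc qdisc add dev eth1 root handle 1: htb default 30
tc class add dev eth1 parent 1:  classid 1:1  htb rate 10mbit
tc class add dev eth1 parent 1:1 classid 1:2  htb rate 3mbit    ceil 3mbit   # XyZ group
tc class add dev eth1 parent 1:2 classid 1:10 htb rate 1500kbit ceil 3mbit   # Client 10
tc class add dev eth1 parent 1:2 classid 1:11 htb rate 1500kbit ceil 3mbit   # Client 11
tc class add dev eth1 parent 1:1 classid 1:30 htb rate 3500kbit ceil 10mbit  # Client 30
tc class add dev eth1 parent 1:1 classid 1:40 htb rate 3500kbit ceil 10mbit  # Client 40
# classify download traffic by destination (hypothetical client addresses)
tc filter add dev eth1 parent 1: protocol ip u32 match ip dst 10.239.107.10/32 flowid 1:10
tc filter add dev eth1 parent 1: protocol ip u32 match ip dst 10.239.107.11/32 flowid 1:11
tc filter add dev eth1 parent 1: protocol ip u32 match ip dst 10.239.107.40/32 flowid 1:40
```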
Years and years ago I wrote a script that did something similar:
https://github.com/frostschutz/FairNAT
It's unmaintained today, but maybe it can give you some ideas anyway.
Traffic shaping in Linux was a neglected/esoteric field, hard to find good documentation too. Not sure if that ever changed...
There is http://lartc.org/ (ignore the wondershaper part)
and the Kernel Packet Traveling Diagram http://www.docum.org/docum.org/kptd/ (also the FAQ)
Or if that's all too complicated, maybe a stateless qdisc like ESFQ will do the trick for you. It tries to achieve some kind of equilibrium between clients without actually applying any hard bandwidth limits.
Good luck.
| How to configure QoS per IP basis? |
1,377,294,573,000 |
In my host I have:
networking.nat.enable = true;
networking.nat.internalInterfaces = ["ve-+"];
networking.nat.externalInterface = "wlp2s0f0u8";
In my container I have defined:
containers.nixbincache = {
privateNetwork = true;
hostAddress = "192.168.140.10";
localAddress = "192.168.140.11";
...
However the container has no external access to the internet. How can I enable external access?
Doing some network debugging:
On the container:
curl -v 116.203.70.99
On the host:
sudo tshark -f "tcp port 80" -i ve-nixbincache
Running as user "root" and group "root". This could be dangerous.
Capturing on 've-nixbincache'
1 0.000000000 192.168.140.11 → 116.203.70.99 TCP 74 59266 → 80 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=1266433161 TSecr=0 WS=128
2 1.062641113 192.168.140.11 → 116.203.70.99 TCP 74 [TCP Retransmission] 59266 → 80 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=1266434223 TSecr=0 WS=128
3 3.110640768 192.168.140.11 → 116.203.70.99 TCP 74 [TCP Retransmission] 59266 → 80 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=1266436271 TSecr=0 WS=128
4 7.142641875 192.168.140.11 → 116.203.70.99 TCP 74 [TCP Retransmission] 59266 → 80 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=1266440303 TSecr=0 WS=128
or with tcpdump:
sudo tcpdump -i ve-nixbincache
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ve-nixbincache, link-type EN10MB (Ethernet), capture size 262144 bytes
20:27:27.351572 IP nixbincache.containers.60420 > static.99.70.203.116.clients.your-server.de.http: Flags [S], seq 1100520269, win 29200, options [mss 1460,sackOK,TS val 1273487804 ecr 0,nop,wscale 7], length 0
20:27:28.399000 IP nixbincache.containers.60420 > static.99.70.203.116.clients.your-server.de.http: Flags [S], seq 1100520269, win 29200, options [mss 1460,sackOK,TS val 1273488851 ecr 0,nop,wscale 7], length 0
20:27:30.447027 IP nixbincache.containers.60420 > static.99.70.203.116.clients.your-server.de.http: Flags [S], seq 1100520269, win 29200, options [mss 1460,sackOK,TS val 1273490899 ecr 0,nop,wscale 7], length 0
20:27:32.367015 ARP, Request who-has blueberry tell nixbincache.containers, length 28
20:27:32.367029 ARP, Reply blueberry is-at 66:3f:59:d4:10:c5 (oui Unknown), length 28
20:27:34.479001 IP nixbincache.containers.60420 > static.99.70.203.116.clients.your-server.de.http: Flags [S], seq 1100520269, win 29200, options [mss 1460,sackOK,TS val 1273494931 ecr 0,nop,wscale 7], length 0
20:27:42.606992 IP nixbincache.containers.60420 > static.99.70.203.116.clients.your-server.de.http: Flags [S], seq 1100520269, win 29200, options [mss 1460,sackOK,TS val 1273503059 ecr 0,nop,wscale 7], length 0
On the host:
iptables -L -v -t nat
Chain PREROUTING (policy ACCEPT 4487 packets, 758K bytes)
pkts bytes target prot opt in out source destination
4488 758K nixos-nat-pre all -- any any anywhere anywhere
2 120 DOCKER all -- any any anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 17558 packets, 1296K bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER all -- any any anywhere !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 17584 packets, 1299K bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- any !docker0 172.17.0.0/16 anywhere
15 960 MASQUERADE all -- any !br-3a2c30a19c92 172.20.0.0/16 anywhere
34 2615 MASQUERADE all -- any !br-88eb2b109258 172.18.0.0/16 anywhere
42 3423 MASQUERADE all -- any !br-8510145730df 172.19.0.0/16 anywhere
17584 1299K LIBVIRT_PRT all -- any any anywhere anywhere
17604 1300K nixos-nat-post all -- any any anywhere anywhere
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 any anywhere anywhere
0 0 RETURN all -- br-3a2c30a19c92 any anywhere anywhere
0 0 RETURN all -- br-88eb2b109258 any anywhere anywhere
0 0 RETURN all -- br-8510145730df any anywhere anywhere
Chain LIBVIRT_PRT (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- any any 192.168.122.0/24 base-address.mcast.net/24
0 0 RETURN all -- any any 192.168.122.0/24 255.255.255.255
0 0 MASQUERADE tcp -- any any 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE udp -- any any 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE all -- any any 192.168.122.0/24 !192.168.122.0/24
Chain nixos-nat-post (1 references)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- any wlp2s0f0u8 anywhere anywhere mark match 0x1
Chain nixos-nat-pre (1 references)
pkts bytes target prot opt in out source destination
72 5330 MARK all -- ve-+ any anywhere anywhere MARK set 0x1
On the container:
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.140.10 0.0.0.0 UG 0 0 0 eth0
192.168.140.10 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
On the container traceroute:
[root@nixbincache:~]# traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 _gateway (192.168.140.10) 0.043 ms 0.010 ms 0.009 ms
2 * * *
3 * * *
4 * * *
5 * * *
6 * * *
7 * * *
8 * * *
9 * * *
10 * * *
11 * * *
12 * * *
13 * * *
14 * * *
15 * * *
16 * * *
17 * * *
18 * * *
19 * * *
20 * * *
21 * * *
22 * * *
23 * * *
24 * * *
25 * * *
26 * * *
27 * * *
28 * * *
29 * * *
30 * * *
|
It worked by running iptables -t nat -A POSTROUTING -o wlp2s0f0u7 -j MASQUERADE.
Here is the output iptable rules:
sudo iptables -L -v -t nat
Chain PREROUTING (policy ACCEPT 154 packets, 22783 bytes)
pkts bytes target prot opt in out source destination
203 29566 nixos-nat-pre all -- any any anywhere anywhere
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 26 packets, 2466 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 2 packets, 400 bytes)
pkts bytes target prot opt in out source destination
66 5673 nixos-nat-post all -- any any anywhere anywhere
25 2126 MASQUERADE all -- any wlp2s0f0u7 anywhere anywhere
Chain DOCKER (0 references)
pkts bytes target prot opt in out source destination
Chain LIBVIRT_PRT (0 references)
pkts bytes target prot opt in out source destination
Chain nixos-nat-post (1 references)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- any wlp2s0f0u8 anywhere anywhere mark match 0x1
Chain nixos-nat-pre (1 references)
pkts bytes target prot opt in out source destination
2 120 MARK all -- ve-+ any anywhere anywhere MARK set 0x1
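Given that the NixOS-generated nixos-nat-post rule above masquerades on wlp2s0f0u8 while the manually added rule that fixed things uses wlp2s0f0u7, the declarative equivalent is presumably just pointing networking.nat.externalInterface at the interface that actually carries the traffic:

```nix
networking.nat = {
  enable = true;
  internalInterfaces = [ "ve-+" ];
  externalInterface = "wlp2s0f0u7";  # must match the real uplink interface
};
```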
| How to enable internet access for nixos container with private network |
1,377,294,573,000 |
I have a physical network with a Linux server (Ubuntu 16.04, kernel 4.13) and several gadgets on it. Each gadget has the same unchangeable static IP, e.g. 192.168.0.222/24. I would like to communicate with all these gadgets via an arbitrary IP protocol (e.g. ICMP ping or a custom UDP protocol)
Fortunately I have a managed network switch connecting the server and the gadgets. I've configured the switch to have a trunk port for the server and access ports for each gadget, each on a different VLAN (VIDs 11, 12, etc).
I have added 8021q to /etc/modules and set up VLAN entries in /etc/network/interfaces:
auto eno2 # For switch management interface
iface eno2 inet static
address 192.168.2.2/24
auto eno2.11 # Gadget 1 (only)
iface eno2 inet static
address 192.168.0.1/24
#auto eno2.12 # Gadget 2 - disabled
#iface eno2 inet static
# address 192.168.0.1/24
With the entries as shown above, I can communicate with gadget 1 (e.g. ping 192.168.0.222) and don't see any traffic from gadget 2.
But I'd like to be able to communicate with all gadgets at the same time, and be able to distinguish one from the other. They don't need to talk to each other. I was thinking for each gadget I could create a unique host IP and subnet, e.g.
Host IP & subnet    "Fake" gadget IP    Actual gadget IP    VLAN interface
192.168.101.1/24    192.168.101.222     192.168.0.222       eno2.11
192.168.102.1/24    192.168.102.222     192.168.0.222       eno2.12
I'd use iptables or nftables to handle the translation in each direction. Then I could ping 192.168.101.222 to reach gadget 1, and ping 192.168.102.222 to reach gadget 2. From each gadget's point of view, its own IP would still be 192.168.0.222 and it would see the ICMP echo requests coming from 192.168.0.1.
This seems like a somewhat unusual variant on NAT. Note the traffic with the "fake" IPs doesn't need to (and shouldn't) leave the server - we're not forwarding to something else on the network.
Is this a reasonable approach to the problem?
How do I set up /etc/network/interfaces and iptables or nftables to achieve this?
|
I was able to achieve this with the following nftables ruleset (I had to build nft from source, as v0.5 which ships with Ubuntu 16.04 doesn't support packet field mangling):
table ip mytable {
chain prerouting {
type filter hook prerouting priority -300; policy accept;
iifname "eno2.11" ip saddr 192.168.0.222 ip saddr set 192.168.101.222
iifname "eno2.12" ip saddr 192.168.0.222 ip saddr set 192.168.102.222
iifname "eno2.13" ip saddr 192.168.0.222 ip saddr set 192.168.103.222
}
chain output {
type filter hook output priority -300; policy accept;
ip daddr 192.168.101.222 ip daddr set 192.168.0.222
ip daddr 192.168.102.222 ip daddr set 192.168.0.222
ip daddr 192.168.103.222 ip daddr set 192.168.0.222
}
}
and the following entries in /etc/network/interfaces:
auto eno2 # For switch management interface
iface eno2 inet static
address 192.168.2.2/24
auto eno2.11
iface eno2.11 inet static
address 192.168.101.1
netmask 255.255.255.0
auto eno2.12
iface eno2.12 inet static
address 192.168.102.1
netmask 255.255.255.0
auto eno2.13
iface eno2.13 inet static
address 192.168.103.1
netmask 255.255.255.0
This doesn't "unmangle" the source IP of outgoing packets, i.e. the gadgets still see requests from the server as coming from 192.168.101.1, 192.168.102.1 etc rather than 192.168.0.1 - in my application this doesn't matter but it could probably be addressed with additional rules in the output chain.
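For completeness, that "unmangling" could in principle be done with a second stateless rewrite, mirroring the approach above. The following is an untested sketch (shown for eno2.11 only): outgoing packets towards gadget 1 get their source rewritten to 192.168.0.1, and a matching `ip daddr set` rule would be needed in the prerouting chain for the gadget's replies to 192.168.0.1. The server would also have to answer ARP for 192.168.0.1 on each VLAN (e.g. via proxy ARP), which this sketch does not cover:

```
table ip mytable {
    chain postrouting {
        type filter hook postrouting priority -300; policy accept;
        # outgoing packets towards gadget 1 appear to come from 192.168.0.1
        oifname "eno2.11" ip saddr 192.168.101.1 ip saddr set 192.168.0.1
    }
}
```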
| nftables / iptables rules to rewrite source IP by interface |
1,377,294,573,000 |
The situation involves 3 machines:
A Some laptop connected somewhere to the Internet via any mean
B A server connected to the Internet through a standard ISP (static IP provided by dyndns: myserver.dyndns.com)
C Another server connected
to the internet via a 4G Dongle
A <--- ISP1 --- ISP 2 ---> B <--- ISP 2 --- 4G ---> C
As the 4G dongle rejects new incoming connections, I put in place an autossh channel to connect from A to C via B:
autossh -M 0 -N [email protected] -R 10022:127.0.0.1:22 -R 10000:127.0.0.1:10000
That works great.
Now, I would like to access the 4G dongle's web interface by typing
myserver.dyndns.com:80
So I tried NATing things:
On B:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 127.0.0.1:10000
and
iptables -t nat -A POSTROUTING -d 127.0.0.1 --dport 10000 -j MASQUERADE
On C:
iptables -t nat -A PREROUTING -p tcp --dport 10000 -j DNAT --to-destination 192.168.8.1:80
and
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
Note: eth1 is the 4G dongle's interface, C's IP on that interface is 192.168.8.100 and the dongle's is 192.168.8.1.
Unfortunately, that doesn't work. I also activated IP forwarding:
echo 1 > /proc/sys/net/ipv4/ip_forward
When typing
iptables -t nat -L -v -n
on B and C, only the PREROUTING line of B sees its packet count increase after each attempt.
This may be due to an incomplete understanding of how netfilter works.
I'd appreciate any help you could provide!
|
Please, could you provide iptables -t filter -nvL outputs on servers B and C?
I guess the autossh channel runs on server C. Is that right? If so, I suggest a different approach. On B, you need a REDIRECT rule, because the kernel will not allow an unprivileged user to open port 80.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 10000
iptables -t filter -A INPUT -p tcp --dport 10000 -j ACCEPT
(EDIT): On server B, GatewayPorts must be enabled in /etc/ssh/sshd_config:
# /etc/ssh/sshd_config
GatewayPorts clientspecified
On server C, forward connections directly to the dongle by modifying autossh arguments:
autossh -M 0 -N [email protected] -R 10022:127.0.0.1:22 \
-R :10000:192.168.8.1:80
The only error I see in your setup resides on the PREROUTING chain rule of server C. In this scenario, it will not be evaluated because it affects only packets that enters through network interfaces. Connections created by ssh are locally generated, so they would be affected by rules in OUTPUT chain.
| IP forwarding through ssh tunnel |
1,377,294,573,000 |
I need to find a way to copy files from mymachine to a server priv-server sitting on a private NATted network via a server pub-server with a public IP. The behind-NAT machine priv-server only has certs for user@mymachine, so the certs need to be forwarded from mymachine via pub-server to priv-server
So in order to log on with SSH with just one command, I use:
$ ssh -tA user@pub-server 'ssh user@priv-server'
— this works perfectly well. The certs are forwarded from mymachine to priv-server via pub-server, and all is set up nicely.
Now, I'd normally use scp for any file transfer needs but I'm not aware of a way to pass all of the tunneling information to scp.
|
Instead use a more low level form of copying files by catting them locally, and piping that into a remote cat > filename command on priv-server:
$ cat file1.txt | ssh -A user@pub-server 'ssh user@priv-server "cat > file1.txt"'
or with compression:
$ gzip -c file1.txt | ssh -A user@pub-server 'ssh user@priv-server "gunzip -c > file1.txt"'
Outtake from man ssh:
-A Enables forwarding of the authentication agent connection. This can also be specified on a per-host basis in a configuration file.
-t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
I initially wasn't aware of an answer, but after a good night's sleep and writing this question, I saw a problem with the command I was trying initially, fixed it, and it worked. But as this seems like a useful thing to do, I decided to share the answer.
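As an aside, OpenSSH 7.3 and newer make this simpler with ProxyJump, which avoids the nested-ssh pipe entirely and lets plain scp work through the jump host. A hypothetical ~/.ssh/config entry using the host names from the question:

```
# ~/.ssh/config (host aliases match the question's names)
Host priv-server
    ProxyJump user@pub-server
    User user
    ForwardAgent yes
```

With that in place, `scp file1.txt priv-server:` copies directly, and a one-off equivalent without the config file is `scp -o ProxyJump=user@pub-server file1.txt user@priv-server:`.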
| Copying a file using SSH over a tunnel with cert forwarding |
1,377,294,573,000 |
Given: I have a machine (HostA) with only one NIC which has Internet connectivity. I have another machine (HostB) with one NIC on the same switch. HostB is not configured for Internet access yet. HostA has its default gateway and DNS servers appropriately configured. IPv4 is being used. OSes on the hosts are Ubuntu 13 and Fedora17.
What I want: Now, I would like HostB to have Internet connectivity, too. Is this possible using 'some' combination of iptables, virtual tun/tap devices, or a VPN setup between HostA and HostB, etc?
What I already know and can do: Currently, I can use an ssh-based SOCKS proxy on HostB (ssh -D 9050 UserA@HostA) and route traffic of select 'socksifiable' applications on HostB via this proxy to HostA and beyond. However, sadly, not all applications are socksifiable. Now, I know very well that if HostA had 2 NICs, I could have used some iptables rules to convert HostA into a gateway that would then route traffic between its NIC-1 and NIC-2 (where NIC-1 would be connected to HostB and NIC-2 to Internet). But installing another NIC in HostA is not feasible for me.
PS: I had posted this earlier on superuser.com but got no useful information.
edit 1:
network information
Host A:
:> ip addr
[...]
2: p4p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether d4:be:d9:d5:46:05 brd ff:ff:ff:ff:ff:ff
inet 192.168.22.9/24 brd 192.168.22.255 scope global p4p1
:> ip route
default via 192.168.22.254 dev p4p1 proto static
192.168.22.0/24 dev p4p1 proto kernel scope link src 192.168.22.9
Host B:
:> ip addr
[...]
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 30:f9:ed:d9:2e:20 brd ff:ff:ff:ff:ff:ff
inet 192.168.22.234/24 brd 192.168.22.255 scope global eth0
:> ip route
169.254.0.0/16 dev eth0 scope link metric 1000
192.168.22.0/24 dev eth0 proto kernel scope link src 192.168.22.234 metric 1
|
Routing is about "where (and if) to send to". That's not limited to selecting a NIC. In your case routing is very simple though.
You need masquerading in its most simple form (all commands on host A):
iptables -t nat -I POSTROUTING -s 192.168.22.234 -j MASQUERADE
And maybe (if not yet) you need allow forwarding:
iptables -I FORWARD 1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -I FORWARD 2 -s 192.168.22.234 -j ACCEPT
Assuming host A is configured as the default gateway for Host B.
Edit 1:
After a chat discussion the situation has become clearer. In theory configuring the default gateway on B should have been enough. But it seems that the gateway (which is not under the control of the questioner) blocks host B. Thus the masquerading solution was necessary.
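For reference, the same rules can be persisted in iptables-restore format. This is a sketch only; the file path is the one used by Debian/Ubuntu's iptables-persistent package, and IP forwarding (net.ipv4.ip_forward=1) must also be enabled separately:

```
# /etc/iptables/rules.v4 -- sketch for host A
*nat
-A POSTROUTING -s 192.168.22.234/32 -j MASQUERADE
COMMIT
*filter
-A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A FORWARD -s 192.168.22.234/32 -j ACCEPT
COMMIT
```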
| Can iptables be used to convert a single-homed host into a NAT server? |
1,377,294,573,000 |
I have a simple Linux router with multiple NICs and IPv4 forwarding enabled.
The router has two static WAN IP addresses, assigned to one interface (eth0, eth0:0). (In the following text, I will obfuscate the actual public IP addresses (257 as an octet).)
The router can be pinged on both external WAN IP addresses from the outside Internet.
Interfaces:
eth0: Internet connection, 134.257.10.10/24, gateway 134.257.10.1
eth0:0: Second IP address on that interface: 134.257.10.20/24
eth1: LAN 1, 192.168.1.1/24
eth2: LAN 2, 192.168.2.1/24
eth3: LAN 3, 192.168.3.1/24
My setup works and all LAN clients (LAN 1-3) can access the Internet and are externally seen as 134.257.10.10. Moreover, I have two incoming port forwardings.
My iptables NAT table looks like this:
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
# Port forwarding:
-A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.10:80
-A PREROUTING -i eth0:0 -p tcp -m tcp --dport 25 -j DNAT --to-destination 192.168.3.33:25
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT
How can I have the LAN3 clients (eth3) appear as 134.257.10.20 on the Internet (eth0:0) for outgoing connections instead of 134.257.10.10 (eth0)?
|
Use SNAT instead of MASQUERADE
... in order to choose something else than the default.
Instead of using MASQUERADE for the generic case (all other LANs), add a SNAT exception for LAN3 clients. This must match before the other nat/POSTROUTING rule in order to override it so -I is used below instead of -A to apply at the correct place on the existing ruleset (mind the bogus 257):
iptables -t nat -I POSTROUTING -s 192.168.3.0/24 -o eth0 -j SNAT --to-source 134.257.10.20
iptables (contrary to nftables) cannot match the input interface of a routed packet in a POSTROUTING hook, so -i eth3 can't be used above, and the match is done by checking the original IP address source instead.
Addressing a problem with eth0:0
While at it, fix the incorrect use of the so-called alias interface name, a concept that exists only for compatibility with Linux's ifconfig command, whose use has been obsolete for more than 20 years but which is still around. Indeed, on Linux ifconfig cannot handle more than one IPv4 address on an interface, and this workaround exists to overcome that. eth0:0 is actually seen by everything other than ifconfig, including the kernel, as the address 134.257.10.20/24 set on eth0 with an associated label eth0:0. This secondary address could have been added like this (after the main address was already in place) with the modern ip addr equivalent:
ip addr add 134.257.10.20/24 brd + label eth0:0 dev eth0
This matters because iptables won't match correctly with a rule using eth0:0. So it has to be replaced within iptables by a check on the interface (eth0) plus a check on the IP address in the same rule.
So if the port 80 is intended to reach 192.168.1.10:80 only for the first public IP address and not both, replace:
-A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.10:80
with:
-A PREROUTING -i eth0 -d 134.257.10.10 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.10:80
If it's for both addresses, then the initial rule is ok.
But for sure the rule for port 25 should be rewritten like this:
-A PREROUTING -i eth0 -d 134.257.10.20 -p tcp -m tcp --dport 25 -j DNAT --to-destination 192.168.3.33:25
The match has to be done on the actual interface (eth0) and the address, because that's what eth0:0 is: the address and not an interface.
The final ruleset becomes then (mind the bogus 257 of course):
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -d 134.257.10.10/32 -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.10:80
-A PREROUTING -d 134.257.10.20/32 -i eth0 -p tcp -m tcp --dport 25 -j DNAT --to-destination 192.168.3.33:25
-A POSTROUTING -s 192.168.3.0/24 -o eth0 -j SNAT --to-source 134.257.10.20
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT
| NAT router with 2 external WAN IPs A+B and multiple internal LANs: Let 1 LAN use external IP address B, all other A |
1,377,294,573,000 |
I have a JioFi WiFi router behind a double-NAT ISP, so I have both an internal WAN IP and an external public IP.
Whenever I forward a specific port, it only affects the WAN IP. As a result, I can't access my web server...
|
Your ISP does NAT between its public IP (157.50.xx.xx) and the ISP internal private IP (10.89.xx.xx). Unless you can convince your ISP (by switching to the appropriate plan, usually for business, that gives you a personal public IP and paying more money, or whatever) to port forward the ports you need, there's nothing you can do.
The router you are using (JioFi, or whatever) hasn't got anything to do with this at all.
Other options are renting a server with a public IP somewhere. Or using IPv6, if your ISP doesn't NAT this (some do).
| How To Configure a noip on linux, if I has Double NAT ISP Like JioFi? |
1,377,294,573,000 |
Similar questions have been asked before but my setup is a little different and the solutions to those questions are not working. A have a CentOS 6 server running iptables with 5 interfaces:
eth0: Management 136.2.188.0/24
eth1: Cluster1 internal 10.1.0.0/16
eth2: Cluster1 external 136.2.217.96/27
eth3: Cluster2 internal 10.6.0.0/20
eth4: Cluster2 external 136.2.178.32/28
What I'm trying to do is to have traffic from eth1 to go out eth2 and be NATd, traffic from eth3 go out eth4 and be NATd, all other traffic (e.g. SSH to the box itself) use eth0.
To do that I configured route tables like so:
ip route add default via 136.2.178.33 src 136.2.178.37 table 1
ip route add default via 136.2.217.97 src 136.2.217.124 table 2
ip rule add fwmark 1 pref 1 table 1
ip rule add fwmark 2 pref 2 table 2
The source IPs are those of the NAT box. The regular default route the management interface will use is in table 0 as it usually is.
I then configured iptables to mark packets using the mangle table so that they use a specific route table (if I am understanding this correctly) and NAT particular source traffic to a particular interface:
iptables -A PREROUTING -t mangle -j CONNMARK --restore-mark
iptables -A PREROUTING -t mangle -m mark --mark 0x0 -s 10.6.0.0/20 -j MARK --set-mark 1
iptables -A PREROUTING -t mangle -m mark --mark 0x0 -s 10.1.0.0/16 -j MARK --set-mark 2
iptables -A POSTROUTING -t mangle -j CONNMARK --save-mark
iptables -A POSTROUTING -t nat -s 10.6.0.0/20 -o eth4 -j MASQUERADE
iptables -A POSTROUTING -t nat -s 10.1.0.0/16 -o eth2 -j MASQUERADE
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -j LOG --log-level debug
iptables -A FORWARD -m state --state NEW -s 10.6.0.0/20 -o eth4 -j ACCEPT
iptables -A FORWARD -m state --state NEW -s 10.1.0.0/16 -o eth2 -j ACCEPT
iptables -A FORWARD -j DROP
When I test this (a simple wget of google.com from a client machine) I can see traffic come in the internal interface (eth3 in the test), then go out the external interface (eth4) with the NAT box's external IP as the source IP. So, the NAT itself works. However, when the system receives the response packet it comes in eth4 as it should but then nothing happens, it never gets un-NAT'd and never shows up on eth3 to go back to the client machine.
Internal interface:
11:52:08.570462 IP 10.6.0.50.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 34573 ecr 0,nop,wscale 7], length 0
11:52:09.572867 IP 10.6.0.50.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 35576 ecr 0,nop,wscale 7], length 0
11:52:11.576943 IP 10.6.0.50.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 37580 ecr 0,nop,wscale 7], length 0
11:52:15.580846 IP 10.6.0.50.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 41584 ecr 0,nop,wscale 7], length 0
11:52:23.596897 IP 10.6.0.50.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 49600 ecr 0,nop,wscale 7], length 0
External interfaces:
11:52:08.570524 IP 136.2.178.37.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 34573 ecr 0,nop,wscale 7], length 0
11:52:08.609213 IP 74.125.198.103.80 > 136.2.178.37.50783: Flags [S.], seq 1197168065, ack 4030201377, win 42540, options [mss 1380,sackOK,TS val 1835608368 ecr 34573,nop,wscale 7], length 0
11:52:08.909188 IP 74.125.198.103.80 > 136.2.178.37.50783: Flags [S.], seq 1197168065, ack 4030201377, win 42540, options [mss 1380,sackOK,TS val 1835608668 ecr 34573,nop,wscale 7], length 0
11:52:09.572882 IP 136.2.178.37.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 35576 ecr 0,nop,wscale 7], length 0
11:52:09.611414 IP 74.125.198.103.80 > 136.2.178.37.50783: Flags [S.], seq 1197168065, ack 4030201377, win 42540, options [mss 1380,sackOK,TS val 1835609370 ecr 34573,nop,wscale 7], length 0
11:52:11.576967 IP 136.2.178.37.50783 > 74.125.198.103.80: Flags [S], seq 4030201376, win 14600, options [mss 1460,sackOK,TS val 37580 ecr 0,nop,wscale 7], length 0
So, why is traffic getting out but iptables is not sending the return traffic back to the client? It seems that routing is correct since packets leave and arrive on the correct interfaces, so what is iptables doing with the return traffic?
|
OK, I figured it out. What I had to do was add the internal subnet route to each route table then set rules to control what interface traffic routes to/from. Then in iptables marking packets with the mangle table was not needed, just the typical forward and nat rules.
ip route add 136.2.178.32/28 dev eth4 table 1
ip route add 10.6.0.0/20 dev eth3 table 1
ip route add default via 136.2.178.33 src 136.2.178.37 table 1
ip rule add iif eth4 table 1
ip rule add from 10.6.0.0/20 table 1
ip route add 136.2.217.96/28 dev eth2 table 2
ip route add 10.1.0.0/16 dev eth1 table 2
ip route add default via 136.2.217.113 src 136.2.217.124 table 2
ip rule add iif eth2 table 2
ip rule add from 10.1.0.0/16 table 2
iptables -A FORWARD -i eth2 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth1 -o eth2 -m state --state NEW -j LOG --log-level debug
iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
iptables -A FORWARD -i eth4 -o eth3 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth3 -o eth4 -m state --state NEW -j LOG --log-level debug
iptables -A FORWARD -i eth3 -o eth4 -j ACCEPT
iptables -A FORWARD -j REJECT --reject-with icmp-host-prohibited
iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE
iptables -t nat -A POSTROUTING -o eth4 -j MASQUERADE
| NAT box with multiple internal and external interfaces |
1,377,294,573,000 |
My home network currently looks like this:
I'd like to restructure it to look like this (get rid of the ISP switch and plug the STBs into mine):
My gateway is a PC with two Ethernet ports (doing NAT and providing DHCP and DNS to my LAN) and is running GNU/Linux.
The obstacle is the ISP's STBs (set-top boxes, like cable boxes but using IPTV over Ethernet). These send a DHCP request which, when received by the ISP gateway, is answered with an internal IP (10.x.y.z) instead of a WAN one. The IPTV portal these devices try to connect to is only accessible from the internal IP.
So, what I need to effectively do is make my gateway behave like a switch (bridge?) for devices with certain MAC addresses.
I'm guessing I would need to do the following:
Add iptables rules to directly copy packets with source addresses matching STBs' MACs from the LAN to the WAN interface
Add iptables rules to directly copy packets with destination addresses matching STBs' MACs from the WAN to the LAN interface (i.e. the opposite of the above)
Make sure the gateway's DHCP server does not reply to the STBs' DHCP requests
Exempt packets from STBs' MACs from NAT-ing?
Do I need to worry about ARP requests?
|
To answer the question as stated, I got some advice from a friend network guru to bridge the LAN and WAN interfaces, then use ebtables to filter out what is to get bridged:
lan0: [add LAN IPs and use as LAN interface]
    eth0 (LAN)
    eth1 (WAN) [add WAN IPs and use as WAN interface]
# Forward traffic to/from STBs
ebtables -A FORWARD -i eth0 -o eth1 -s $STB_MAC -j ACCEPT
ebtables -A FORWARD -i eth1 -o eth0 -d $STB_MAC -j ACCEPT
# Allow DHCP responses
ebtables -A FORWARD -i eth0 -o eth0 -d ff:ff:ff:ff:ff:ff -p ipv4 --ip-proto udp --ip-sport 67 --ip-dport 68 -j ACCEPT
# Allow ARP requests
ebtables -A FORWARD -i eth0 -o eth0 -d ff:ff:ff:ff:ff:ff -p arp --arp-ip-dst ! $LAN_SUBNET -j ACCEPT
# The WAN is not really part of the LAN
ebtables -A INPUT -i eth1 -j DROP
ebtables -A FORWARD -i eth1 -j DROP
ebtables -A FORWARD -o eth1 -j DROP
ebtables -A OUTPUT -o eth1 -j DROP
# Allow eth1 to be used to access the WAN
ebtables -t broute -A BROUTING -i eth1 -d $STB_MAC -j ACCEPT
ebtables -t broute -A BROUTING -i eth1 -j DROP
However, in this case it turned out that the ISP switch is doing port-based VLAN tagging (one VLAN for the PC port and another VLAN for the STB ports).
Therefore, to get rid of the ISP router, I needed to:
Set up the two VLANs on the gateway's WAN NIC
Move the WAN IP configuration from the NIC to to the WAN VLAN (incl. the NAT)
Blacklist the STBs' MAC addresses from my gateway's DHCP server
Bridge the ISP's internal VLAN with my LAN
(Optional) Add firewall rules to avoid LAN packets from leaking into the ISP VLAN.
This works because the ISP DHCP server already only replies to STBs' DHCP requests, so blacklisting them on the gateway results in all DHCP replies getting answered by either the gateway, or the ISP DHCP server. If using dnsmasq, blacklisting can be done using e.g. dhcp-host=01:23:45:*:*:*,ignore in /etc/dnsmasq.conf.
Everything else can be done using systemd-networkd configuration files:
- Create .netdev files for VLANs
- Add them (as VLAN entries) to the .network file matching on the WAN NIC
- Create .network files for the VLANs
- Create a .netdev file for the LAN/STB bridge
- Add the STB VLAN and the LAN NIC to the bridge
Here's a full example set of systemd-networkd configuration files:
/etc/systemd/network/isp-link.network - Network configuration for the WAN NIC:
Here we just set up the VLANs. Make sure to change the VLAN IDs to the one the ISP uses. This example uses 1234 for the WAN VLAN and 56 for the internal (IPTV) one.
[Match]
Name=eno1
[Network]
VLAN=eno1.1234
VLAN=eno1.56
/etc/systemd/network/eno1.1234.netdev - WAN VLAN:
[NetDev]
Name=eno1.1234
Kind=vlan
[VLAN]
Id=1234
/etc/systemd/network/eno1.56.netdev - ISP internal VLAN for IPTV:
[NetDev]
Name=eno1.56
Kind=vlan
[VLAN]
Id=56
/etc/systemd/network/home-bridge.netdev - The ISP-VLAN/home-LAN bridge device:
[NetDev]
Name=br0
Kind=bridge
/etc/systemd/network/home-bridge.network - The network for said bridge:
Make sure to not enable the DHCP server here. As systemd-networkd doesn't allow configuring a blacklist of MACs to ignore, you'll need to use another DHCP server (e.g. dnsmasq) instead.
[Match]
Name=br0
[Network]
Address=192.168.0.1/24
IPForward=yes
IPMasquerade=yes
DHCP=no
DHCPServer=no
/etc/systemd/network/home-lan.network - Network file for the LAN NIC:
[Match]
Name=enp9s0
[Network]
Bridge=br0
/etc/systemd/network/isp-vlan-wan.network - Your gateway's WAN configuration:
This uses DHCP - if you have a static configuration, change this to Address/Gateway/DNS settings.
[Match]
Name=eno1.1234
[Network]
IPForward=yes
DHCP=yes
/etc/systemd/network/isp-vlan-internal.network - The network file for the internal VLAN:
[Match]
Name=eno1.56
[Network]
Bridge=br0
/etc/dnsmasq.conf - DHCP server configuration:
# Disable DNS server - will be handled by systemd-networkd
port=0
# Enable and configure DHCP server
dhcp-range=192.168.0.129,192.168.0.254,12h
# Specify which interfaces to listen on
listen-address=127.0.0.1
listen-address=::1
listen-address=192.168.0.1
# Ignore DHCP requests from the STBs
dhcp-host=01:23:45:*:*:*,ignore
| Exempt some devices from NAT |
1,377,294,573,000 |
There is only one table on the server - "nat" and it contains only two chains: "prerouting" and "postrouting". IP forwarding is enabled. I'm trying to set more specific conditions for the source nat rule. When I set the classic rule :
nft add rule nat postrouting ip saddr 192.168.1.0/24 oif eth0 snat 1.2.3.4
everything works fine. But I'd like to specify also the interface where it's located for the network "saddr 192.168.1.0/24".
nft add rule nat postrouting iif eth1 ip saddr 192.168.1.0/24 oif eth0 snat 1.2.3.4
When I enter this command, it is accepted and the rule appears in the table, but the traffic stops flowing. Does anyone have any idea why?
|
This feature also requires kernel >= 5.5 for adequate netfilter support. Description in kernelnewbies.org:
Linux 5.5 was released on 26 Jan 2020
[...]
netfilter
Support iif matches in POSTROUTING commit
From the commit:
netfilter: Support iif matches in POSTROUTING
Instead of generally
passing NULL to NF_HOOK_COND() for input device, pass skb->dev which
contains input device for routed skbs.
Note that iptables (both legacy and nft) reject rules with input
interface match from being added to POSTROUTING chains, but nftables
allows this.
From the description, before this commit, the input interface was not provided by netfilter and the iif expression never matched.
| How to set hard conditions for source nat rules in nftables? |
1,377,294,573,000 |
Basically trying to bend bridging and NATing to my will with quite a unique project.
I've simplified what I'm doing below (VM=Kali virtual machine for testing):
ZoneX's are network namespaces, vexxx's are virtual links created with ip link
The premise is to create a gateway for the LAN which can divert traffic (based on what it is) to either ZoneX or ZoneY modify the traffic and forward it to ZoneZ and finally out to the real networks gateway.
I've tried quite a few different things, but the main problem is that I either create a layer-2 storm (not nice in VMs) or the NAT network namespace (ZoneZ) forwards the return traffic via the first interface in the NAT table for the client VM (which is sometimes incorrect).
The main aim is to split the traffic to multiple zones but have the return traffic take the same route back; that's the clincher! The next stage is then to be able to chain multiple Zones together to modify the traffic in multiple ways.
*** EDIT
A connection example would be a DNS lookup to 8.8.8.8 and an TCP request to 8.8.8.8, both from the VM.
Firstly the DNS request passes to eth0 over brA to ve001, to ZoneA where the packet is marked (using iptables) and passed to ve003 > ve004 etc. to ve006 where it is NAT'd and sent out to the internet. When the response returns to ZoneZ (the NAT zone) the lookup in the NAT table is done and the packet is routed to ve006 because the ARP entry for the VM machine points to that interface.
The main trouble comes when I have other traffic I want to forward via the bottom route. Same as before until ZoneA, however this time it is routed down to ve007, through ZoneY and finally into ZoneZ; it's then passed over the NAT gateway and on to the internet. However, when a reply is received for this connection, the packets go to ZoneZ, the lookup is done in the NAT table, the packet is translated, and then the ARP table lookup is done; this is when it forwards the packet back via ve006, which is wrong. I want it to go back the way it came (in this case via ve010).
I guess my question should be, can I get the NAT table to record the interface it was presented from and forward it back via that?
|
The solution is to mark new connections and use the mark for policy routing:
iptables -t mangle -A FORWARD -i ve006 -m connmark -j CONNMARK --set-mark 6
iptables -t mangle -A FORWARD -i ve010 -m connmark -j CONNMARK --set-mark 10
ip rule has a test for fwmark. Thus you create a routing table for ve006 and one for ve010.
ip route add default table ve006 via a.b.c.51 dev ve006
# .51 again, typo?
ip route add default table ve010 via a.b.c.51 dev ve010
ip rule add pref 100 iif ve998 fwmark 6 table ve006
ip rule add pref 101 iif ve998 fwmark 10 table ve010
| How to create an internal multipath gateway |
1,377,294,573,000 |
My ISP still doesn't support IPv6, and I currently use OpenVpn or OpenConnect VPN to get IPv6 connection.
Now there is an Ubuntu server running the VPN client behind an OpenWRT router.
Is it possible to share (only) the IPv6 connection from the server to the whole LAN network?
|
Sure, if your IPv6 connection gives you a prefix that is large enough for all the devices in the LAN (i.e. a prefix length shorter than /128, ideally /64 or shorter).
First, enable IPv6 forwarding by adding this line to /etc/sysctl.conf:
net.ipv6.conf.all.forwarding=1
Then either run sudo sysctl -p or reboot to make it take effect.
This will disable IPv6 autoconfiguration as a side effect, so you may need to re-enable it on your VPN interface, by arranging for these commands to run after your VPN interface has come up:
sysctl -w net.ipv6.conf.<your_VPN_interface_here>.accept_ra=2
sysctl -w net.ipv6.conf.<your_VPN_interface_here>.autoconf=1
If you are getting a /128 prefix (= single host only) on the VPN interface, you'll need to contact your VPN provider and find out how you can get a wider prefix. It might require different VPN settings, or it might require setting up a DHCPv6 client to make a Prefix Request on the VPN-side.
If you can get a wider-than-/64 prefix (= smaller number) on your VPN interface, it will allow you to use SLAAC on your LAN, which is probably the easiest way to configure IPv6.
You'll also want to think about IPv6 firewalling here: set some IPv6 firewall rules in advance to block any incoming connections from the VPN-side you don't specifically need. Think twice before blocking any ICMPv6 types or IPv6 multicasts; both are essential parts of the IPv6 protocol and you can't just completely block them.
Your system is now ready to become an IPv6 router. You should now think about how you split the IPv6 address space allowed to you by the prefix on the VPN interface, and assign sub-prefix(es) of it to your LAN-side interface(s). Since all the IPv6 addresses within your prefix are now yours to use as you see fit, you can just manually assign the IPv6 address for your LAN-side interface(s).
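To get a feel for how a delegated prefix splits into per-LAN /64s, Python's standard ipaddress module can do the arithmetic. The 2001:db8::/56 prefix below is just a documentation-range example, not something your VPN would actually hand out:

```python
import ipaddress

# A hypothetical /56 delegation holds 2**(64 - 56) = 256 possible /64 LANs
delegated = ipaddress.ip_network("2001:db8:0:ff00::/56")
lans = list(delegated.subnets(new_prefix=64))

print(len(lans))    # 256
print(lans[0])      # 2001:db8:0:ff00::/64
print(lans[1])      # 2001:db8:0:ff01::/64
```

Each of those /64s can then be assigned to a separate LAN-side interface.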
Then install and configure radvd to make your system announce to the LAN that it's an IPv6 router. Modern versions of radvd will also be able to announce the IPv6 DNS servers and the domain suffix the clients should use.
radvd configuration will also include a few settings that tell the clients if they're allowed to just pick an IPv6 address on their own by using stateless address autoconfiguration (SLAAC), or if they're supposed to use DHCPv6. Also, even if you choose to allow SLAAC, you may still make a DHCPv6 server available to provide some extra configuration information to your clients.
Once you've set those bits appropriately and started up radvd to announce the presence of an IPv6 router to your LAN, if your LAN-side sub-prefixes are /64 or wider and you have allowed SLAAC, your clients should now be receiving the radvd announcements and automatically getting themselves a basic IPv6 configuration.
If you ended up with a tighter-than-/64 prefix (=higher number) on your LAN-side, or if you chose to not use SLAAC, you'll now need to set up a DHCPv6 server for your clients.
Then it's time to start testing. If your clients are only showing a link-local IPv6 address that starts with fe80::, you should first verify that they are seeing the router announcements of radvd. That's a fundamental part of IPv6 autoconfiguration; without those announcements, nothing else will happen.
| How to share an IPv6 connection to the whole LAN? |
1,377,294,573,000 |
Machine 62: Ubuntu 16.04, has access to internet, can be accessed via the internet.
On the 62 machine, there is VirtualBox with a VM (also Ubuntu).
I'd like the VM to behave like a 'normal' machine (ip-requests). I reserved a static IP for it, but now I'm not sure how to configure the interfaces of the host and the guest in a way I can ping the guest-machine like I would usually ping the host machine (via the reserved IP instead of 62...).
ReverseProxy worked for a while, but then I needed websockets over ports I don't know in advance. So now my next guess is NAT? ipforwarding? Bridged networks? Masquerading?
|
You've to create a bridged network between the Host and VM. The configuration varies on Hypervisor vendor.
In case you're using Oracle VirtualBox:
Open Oracle VM VirtualBox Manager, select the VM and go to the network section.
In the Adapter 1 tab, change the default NAT to Bridged Adapter and choose the host's network adapter from the Name drop-down. Apply it.
Now configure the network inside the VM as per your ISP's configuration (static, dynamic, etc.).
If you're using QEMU-KVM
Open Virtual Machine Manager GUI tool (In case you're using GUI)
Select the specific Virtual machine and Open it. Then select the NIC
Change the default NAT to Host Device xxxxxx: macvtap, apply it, and then configure the network inside the virtual machine as per your ISP's configuration.
If you don't have the GUI tool, then use virsh edit virtual-machine-name and modify the configuration as described previously.
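For reference, a macvtap NIC in the libvirt XML (the kind of stanza you would see with virsh edit) looks roughly like this sketch, where 'eth0' stands in for the host's real NIC:

```xml
<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
</interface>
```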
A third alternative is creating an iptables NAT rule on the host, then forwarding all traffic towards that IP (the one you want to assign to the VM) to the VM's internal IP address. You have to write a few iptables lines; please search for examples. But for me it's the least preferred method.
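A hedged sketch of that third approach in iptables-save format, with made-up addresses (203.0.113.5 for the extra IP reserved on the host, 10.0.2.15 for the VM's internal address); you'd also need net.ipv4.ip_forward=1 on the host:

```
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
# everything arriving for the reserved IP goes to the VM
-A PREROUTING -d 203.0.113.5 -j DNAT --to-destination 10.0.2.15
# rewrite the VM's source address so replies return through the host
-A POSTROUTING -s 10.0.2.15 -j MASQUERADE
COMMIT
```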
NOTE-1: Bridged networking and macvtap don't work with most of the WiFi adapters in the host.
NOTE-2: In the case of macvtap, your host will be unreachable from the VM and vice versa. Both of them will be reachable from the outside network. Further, if you have multiple VMs, they will remain reachable to each other. This is the way macvtap works. To access VMs from the host and vice versa, create one more NAT adapter in the case of Oracle VirtualBox, or one more NIC with NAT for QEMU-KVM.
NOTE-3: For both bridged networking and macvtap, the host's network adapter must have an IP address reserved per VM, as the question already mentioned.
| Access VM via static IP (NAT?) |
1,598,462,922,000 |
I need to script some Iptables rule changes involving NAT rules (-t nat) on Ubuntu 16 servers.
It seems like the common way to drop a rule using -D [rule here] does not work with the -t identifier... I really do not want to complicate the scripting by having to identify which rule in my chain I'm looking for and get its associated line number... Any ideas?
In case it helps, the purpose of the below rules is to redirect traffic both localhost and external from 1 server to a backup, during a crash or restart of a local MySQL database (basically).
My Rules:
iptables -t nat -A POSTROUTING -j MASQUERADE
iptables -t nat -A PREROUTING -p tcp --dport 3306 -j DNAT --to-destination RMT_IP:3306
iptables -t nat -I OUTPUT -p tcp -o lo --dport 3306 -j DNAT --to-destination RMT_IP:3306
My Attempt to Drop (Works):
iptables -t nat -D POSTROUTING -j MASQUERADE
iptables -t nat -D PREROUTING -p tcp --dport 3306 -j DNAT --to-destination RMT_IP:3306
Cannot figure out how to drop this rule without using --line-numbers:
iptables -t nat -I OUTPUT -p tcp -o lo --dport 3306 -j DNAT --to-destination RMT_IP:3306
|
Given any rule with -I (insert) or -A (append), you can repeat the rule definition with -D to delete it.
For your particular example, this will delete the first matching rule in the OUTPUT chain of the nat table:
iptables -t nat -D OUTPUT -p tcp -o lo --dport 3306 -j DNAT --to-destination RMT_IP:3306
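If the script should also tolerate the rule already being absent, iptables has a -C (check) option that tests for an exact rule match; a sketch reusing the same rule:

```
# -C exits 0 if the rule exists, so delete only when present
iptables -t nat -C OUTPUT -p tcp -o lo --dport 3306 -j DNAT --to-destination RMT_IP:3306 &&
iptables -t nat -D OUTPUT -p tcp -o lo --dport 3306 -j DNAT --to-destination RMT_IP:3306
```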
| iptables - Drop NAT rules based on rule/name, NOT rule number |
1,598,462,922,000 |
I observed that MASQUERADE target does not match on packets in the reply direction (in terms of netfilter conntrack).
I have a single simple -t nat -A POSTROUTING -s 10.a.0.0/16 -d 10.b.0.0/16 -j MASQUERADE rule, nothing else besides ACCEPT policies on all chains, and it seems that
case 1) SYN packets of connection initialization attempts from 10.a/16 network get NAT-ed (this is OK), while
case 2) SYN/ACK packets again from 10.a/16 network (in response to SYN from 10.b/16, ie. the initiator is 10.b/16 in this case) do not get translated, but src address is kept as-is, simply routed.
I'm not sure if this is the expected behaviour or if I missed something. I mean, I don't want it to behave any other way, and everything seems to be working, but the documentation did not confirm to me that this is the factory-default behaviour of the MASQUERADE target.
Could you confirm it? thanks.
|
The identity of a TCP connection is defined by a set of four things:
the IP address of endpoint A
the IP address of endpoint B
the port number of endpoint A
the port number of endpoint B
The TCP protocol standard says that if any of these four things are changed, the packet must not be considered part of the same connection. As a result, it makes no sense to start applying a NAT rule to a SYN/ACK packet if the initial SYN was also not NATted. You must either apply the same kind of NAT mapping for the entire connection from the start to finish, or not NAT it at all; any attempt to add or change a NAT mapping mid-connection will just cause the TCP connection to fail. This is a fundamental fact of the TCP protocol, and the Linux iptables/netfilter code is designed to take it into account.
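As a toy model of that rule (plain Python, not how the kernel actually implements conntrack), a receiving endpoint's connection table is keyed by the full 4-tuple, so a reply whose addresses were rewritten mid-connection simply matches no entry:

```python
# Model of a TCP endpoint's connection table, keyed by the 4-tuple
# (local_ip, local_port, remote_ip, remote_port).
connections = {}

def open_connection(local_ip, local_port, remote_ip, remote_port):
    connections[(local_ip, local_port, remote_ip, remote_port)] = "SYN_SENT"

def handle_synack(dst_ip, dst_port, src_ip, src_port):
    """Return the connection state a SYN/ACK matches, or None."""
    return connections.get((dst_ip, dst_port, src_ip, src_port))

# 10.b.0.5 opened a connection straight to 10.a.0.9 (the SYN was not NATted)
open_connection("10.b.0.5", 40000, "10.a.0.9", 80)

# An un-NATted SYN/ACK from 10.a.0.9 matches the table...
assert handle_synack("10.b.0.5", 40000, "10.a.0.9", 80) == "SYN_SENT"

# ...but one whose source was rewritten mid-connection matches nothing,
# so the client would answer it with a RST.
assert handle_synack("10.b.0.5", 40000, "10.b.0.1", 80) is None
```

(All addresses and ports here are hypothetical.)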
In your case 2), the SYN/ACK is preceded by a SYN from 10.b/16. That SYN has a source of 10.b/16, so it does not match the MASQUERADE rule and gets routed with addresses kept as-is. Then, if the SYN/ACK from 10.a/16 back to 10.b/16 would be translated, the sender of the original SYN would no longer recognize it as a response to its own SYN, as the source IP + destination IP + source port + destination port combination would be different from what is expected for a valid response.
Essentially, the TCP protocol driver in the system that initiated the connection in 10.b/16 would then be thinking: "Sigh. The 10.a.connection.destination is not answering. And 10.b.NAT.system is bothering me with clearly spurious SYN/ACKs: I'm attempting to connect 10.a.connection.destination, not him. If I have time, I'll send a RST or two to 10.b.NAT.system; hopefully he realizes his mistake and stops bothering me."
| in iptables, does MASQUERADE match only on NEW connections (SYN packets)? |
1,598,462,922,000 |
I have an embedded Linux device tethered to my PC via Ethernet. The PC is able to access a tftp server via VPN, and I am trying to set up iptables rules to allow the embedded device to access the tftp server.
First of all, I used Network Manager to bring up a connection on the eno1 device, and configured the embedded device on the same (private) network. I can successfully access the web configuration page on the device.
Secondly, I added iptables rules on the PC to forward and NAT traffic to and from the VPN, where 192.168.11.0/24 is the private subnet on the local Ethernet, and the remote tftp server is on the 172.16.0.0/16 subnet via tun0:
-A FORWARD -i eno1 -j ACCEPT
-A FORWARD -o eno1 -j ACCEPT
-A POSTROUTING -s 192.168.11.0/24 -d 172.16.0.0/16 -o tun0 -j MASQUERADE
The device can now access http on the remote server via the VPN, but I have failed to find rules to forward and NAT tftp traffic to the same server. I have manually loaded the following kernel modules (I don't know whether this is necessary):
nf_nat_tftp 16384 0
nf_conntrack_tftp 16384 1 nf_nat_tftp
I have tried every combination of -A FORWARD ... RELATED,ESTABLISHED -j ACCEPT that I could think of (and that the internet suggested), but no joy. My limited understanding is that I need some sort of stateful forwarding rule, but what can I use, please?
|
Since Linux 4.7 in 2016, the conntrack automatic helper assignment was disabled by default (after having years of warning about this in kernel messages):
Four years ago we introduced a new sysctl knob to disable automatic
helper assignment in 72110dfaa907 ("netfilter: nf_ct_helper: disable
automatic helper assignment"). This knob kept this behaviour enabled
by default to remain conservative.
This measure was introduced to provide a secure way to configure
iptables and connection tracking helpers through explicit rules.
While it's easy to revert to the non-secure version, I'd rather provide an answer using the new method with iptables (nftables can do this too, with slight differences in the chain priorities). The way to do this is documented in this blog: Secure use of iptables and connection tracking helpers.
So following the informations there, this should be done on the router doing NAT:
big warning:
The helper module must exist and be able to be auto-loaded before the rule referencing it in the raw table, or the rule addition will fail. This could even prevent an iptables-restore from working correctly and leave a firewall without any rules at boot.
Anyway, the NAT part of the module (here nf_nat_tftp) will not be auto-loaded, so it's better to load this module explicitly at system boot anyway, as the OP did.
assuming this is now the default or has been done:
# sysctl -w net.netfilter.nf_conntrack_helper=0
explicit declaration of helper location, where selectors here mimic those used in OP's POSTROUTING (using the precise and unique IP address of the TFTP server would be better):
iptables -A PREROUTING -t raw -p udp --dport 69 -s 192.168.11.0/24 -d 172.16.0.0/16 -j CT --helper tftp
This rule alone should now have the helper be activated when appropriate, thus triggering the mangling of TFTP data and ports, since TFTP is a complex protocol where server replies can come back from an unrelated source port to the dynamic/ephemeral client source port, as seen in this Wikipedia entry for TFTP.
explicitly accept generic established traffic, related traffic to tftp, and initial TFTP queries:
Since you already ACCEPT anything coming from and going to eno1 these rules aren't useful for your current configuration. Should you choose to tighten your firewall rules, here they are. You can choose to make them more or less tight (add interfaces etc.):
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED -m helper --helper tftp -s 172.16.0.0/16 -d 192.168.11.0/24 -j ACCEPT
iptables -A FORWARD -s 192.168.11.0/24 -d 172.16.0.0/16 -p udp --dport 69 -j ACCEPT
The 2nd rule authorizes those TFTP-specific reverse connections from the server to the client (hence the reversed directions) before replies get them ESTABLISHED. The FORWARD chain sees non-NATed traffic, so it doesn't have to know what happened during MASQUERADE (the tftp helper did intervene there, though).
| iptables rules to forward tftp via NAT |
1,598,462,922,000 |
Under an Ubuntu 18.04 host, I have set-up an Ubuntu 10.04 guest VM from a cloned HD. The VM fires up fine, I can ssh into it from the host, but it fails to communicate outside of the host.
My question is what is wrong with my configuration below:
Guest VM:
Started with qemu-system-x86_64 G.qcow2 -m 4096 -smp 4 -no-acpi -enable-kvm -name system76 -vga std -device virtio-net,netdev=net0 -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,br=br0vm
Manual network configuration with the GUI / network manager. Static IP 192.168.118.18, mask 255.255.255.0, gw 192.168.118.1, dns 192.168.118.1 (I assumed the tactic I use with my router would work here too, though that may be a mistake).
Somehow ifconfig reports interfaces I did not think were configured. I thought that with the qemu line above we defined net0, but, below, we see eth1 and virbr0!
# ifconfig
eth1 Link encap:Ethernet HWaddr 52:54:00:12:34:56
inet addr:192.168.118.18 Bcast:192.168.118.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe12:3456/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1203 errors:0 dropped:0 overruns:0 frame:0
TX packets:1606 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:114773 (114.7 KB) TX bytes:290952 (290.9 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:313 errors:0 dropped:0 overruns:0 frame:0
TX packets:313 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:45604 (45.6 KB) TX bytes:45604 (45.6 KB)
virbr0 Link encap:Ethernet HWaddr 1e:8f:5c:98:b1:25
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
inet6 addr: fe80::1c8f:5cff:fe98:b125/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:84 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:13004 (13.0 KB)
Host configuration:
apt install uml-utilities
apt install bridge-utils
vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
sysctl -p
ip link add name br0vm type bridge
ip addr add 192.168.118.1/24 dev br0vm
ip link set br0vm up
tunctl -t tap0 -u asoundmove
ip link set tap0 up
brctl addif br0vm tap0
mkdir /etc/qemu
vi /etc/qemu/bridge.conf
allow br0vm
# test access to the guest - it works
ssh [email protected]
iptables -t nat -A POSTROUTING -o br0vm -j MASQUERADE
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -o eno1 -i br0vm -j ACCEPT
Trying things out:
I can ping the host from the guest:
ping 192.168.118.1
PING 192.168.118.1 (192.168.118.1) 56(84) bytes of data.
64 bytes from 192.168.118.1: icmp_seq=1 ttl=64 time=0.152 ms
64 bytes from 192.168.118.1: icmp_seq=2 ttl=64 time=0.145 ms
I cannot ping the switch/router to which the host is connected from the guest, the following ping returns nothing:
ping 192.168.117.1
Both ping requests (117.1 & 118.1) show in tcpdump on the guest eth1 and host br0vm, below the tcpdump for the ping 192.168.117.1 requests on the guest.
tcpdump -i br0vm not port ssh
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0vm, link-type EN10MB (Ethernet), capture size 262144 bytes
11:57:34.211291 IP 192.168.118.18 > routerlogin.net: ICMP echo request, id 55320, seq 1, length 64
11:57:35.216163 IP 192.168.118.18 > routerlogin.net: ICMP echo request, id 55320, seq 2, length 64
11:57:36.215940 IP 192.168.118.18 > routerlogin.net: ICMP echo request, id 55320, seq 3, length 64
11:57:37.215919 IP 192.168.118.18 > routerlogin.net: ICMP echo request, id 55320, seq 4, length 64
11:57:39.205837 ARP, Request who-has host.hostname tell 192.168.118.18, length 28
11:57:39.205859 ARP, Reply host.hostname is-at 4e:31:b7:14:1b:a9 (oui Unknown), length 28
watch -d -n 1 iptables -nvL
Chain INPUT (policy ACCEPT 2604K packets, 21G bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
50 4200 ACCEPT all -- br0vm eno1 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 1974K packets, 43G bytes)
pkts bytes target prot opt in out source destination
The 50 4200 counters increment with every ping request
watch -d -n 1 iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 4283 packets, 510K bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 4175 packets, 503K bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 9155 packets, 663K bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 9227 packets, 666K bytes)
pkts bytes target prot opt in out source destination
25 3789 MASQUERADE all -- * br0vm 0.0.0.0/0 0.0.0.0/0
However, the 25 3789 counters do not increase with every ping request.
From the host, this works:
ping 192.168.117.1
PING 192.168.117.1 (192.168.117.1) 56(84) bytes of data.
64 bytes from 192.168.117.1: icmp_seq=1 ttl=64 time=0.609 ms
64 bytes from 192.168.117.1: icmp_seq=2 ttl=64 time=0.585 ms
What am I doing wrong that the IP traffic on the 118 subnet does not get forwarded to the 117 subnet?
EDIT:
Additional information:
ip -br link
lo UNKNOWN 00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
eno1 UP 30:9c:23:9b:eb:df <BROADCAST,MULTICAST,UP,LOWER_UP>
br0vm UP 4e:31:b7:14:1b:a9 <BROADCAST,MULTICAST,UP,LOWER_UP>
tap0 UP 4e:31:b7:14:1b:a9 <BROADCAST,MULTICAST,UP,LOWER_UP>
ip -br address
lo UNKNOWN 127.0.0.1/8 ::1/128
eno1 UP 192.168.117.110/24 fe80::8fae:b4f2:8b90:7601/64
br0vm UP 192.168.118.1/24 fe80::a46d:86ff:fe2a:ddbf/64
tap0 UP fe80::4c31:b7ff:fe14:1ba9/64
ip route
default via 192.168.117.1 dev eno1 proto static metric 100
169.254.0.0/16 dev eno1 scope link metric 1000
192.168.117.0/24 dev eno1 proto kernel scope link src 192.168.117.110 metric 100
192.168.118.0/24 dev br0vm proto kernel scope link src 192.168.118.1
|
The MASQUERADE rule happens at POSTROUTING: that is, after a routing decision has already been made and a destination interface has already been chosen. To communicate with the outside, the host will use 192.168.117.1 via eno1. So the MASQUERADE rule criteria should match the output interface eno1 rather than br0vm.
You should thus have used:
iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE
but since this can have unwanted effects (e.g. masquerading another host's IP to its default one, etc.) and since there are some side effects of a bridge that might appear later for completely unrelated reasons (described here and there) if you make some specific changes, here's IMHO the best simple rule to use instead:
iptables -t nat -A POSTROUTING -s 192.168.118.0/24 ! -d 192.168.118.0/24 -j MASQUERADE
| nat configuration for qemu-kvm guest and host networks |
1,598,462,922,000 |
I'm experimenting with a DNS server setup that replies with different results based on the source IP address. At the same time I need to dynamically change which interface an external source IP should be forwarded to.
eth0 physical inteface 192.168.1.10
eth0: virtual interface 1 192.168.1.11
eth0:1 virtual interface 2 192.168.1.12
I have bind9 installed on my server with two views configured, listening on 192.168.1.11 and 192.168.1.12 respectively.
In my setup the only external-facing interface is eth0 and all the clients request DNS through it. I need to forward those requests to my virtual interfaces based on my clients' source IP addresses, and change this dynamically.
as an example
for scenario 1
if user 192.168.1.40 queries DNS through eth0, I need to forward him to eth0: (192.168.1.11)
for scenario 2
the same user (192.168.1.40) I need to forward to eth0:1 (192.168.1.12)
I want to achieve that an external user can get different results by using the same DNS server at two different times.
|
Have a look at iproute2.
You can easily configure multiple routing tables and define which network interface handles a connection, which includes a solution to your problem.
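A hedged sketch of the mechanism (the table number 100 and the gateway addresses are made up for illustration): an ip rule sends one client's traffic through its own routing table, and rewriting that table later changes the behaviour dynamically:

```
# packets from this client consult routing table 100 instead of "main"
ip rule add from 192.168.1.40 lookup 100

# give table 100 its own default route
ip route add default via 192.168.1.1 table 100

# later: change the behaviour for that client only
ip route replace default via 192.168.1.2 table 100
```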
Here are some useful examples:
http://linux-ip.net/html/routing-tables.html
http://lartc.org/howto/lartc.rpdb.html
References:
http://man7.org/linux/man-pages/man8/ip-rule.8.html
http://man7.org/linux/man-pages/man8/ip.8.html
| Forward traffic to virtual interface based on source IP address dynamically using iptables |
1,598,462,922,000 |
I have 2 CentOS 7 guests running in VirtualBox on a Ubuntu host.
I want to be able to:
Connect using ssh from host to guest
Download/install packages from the Internet on the guest.
I currently have following two virtual network interfaces
Host Only, mapped as 'enp0s3' on guest
NAT, mapped as 'enp0s8' on guest
My current configuration:
$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s3
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=enp0s3
UUID=fcd0aa44-9ab7-42e6-a637-52c429727195
ONBOOT=yes
HWADDR=08:00:27:BE:DB:11
IPADDR=192.168.56.102
PREFIX=32
GATEWAY=192.168.56.1
and
$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s8
HWADDR=08:00:27:A2:03:29
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s8
UUID=09acefe7-d513-48f6-b820-0988ac495e5e
ONBOOT=yes
Current route info:
$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.56.1 0.0.0.0 UG 1024 0 0 enp0s3
10.0.3.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s8
192.168.56.1 0.0.0.0 255.255.255.255 UH 1024 0 0 enp0s3
At this point I can ssh from my Ubuntu host to my CentOS guest but I
cannot successfully connect to the Internet:
$ wget https://github.com/antirez/redis/archive/3.0.0-rc6.tar.gz
Resolving github.com (github.com)... 192.30.252.130
Connecting to github.com (github.com)|192.30.252.130|:443...
Lots of other posts and samples suggested that I remove the default gateway from 'enp0s3'.
If I remove the default gateway from 'enp0s3', I cannot ssh from the Ubuntu host to the CentOS guest.
How can I make this work?
|
So I finally got it to work.
I was missing the netmask value in the configuration.
Host only NIC enp0s3
TYPE=Ethernet
BOOTPROTO=static
NAME=enp0s3
UUID=71d4200e-199d-4d03-935d-6d2e88c41956
DEVICE=enp0s3
ONBOOT=yes
IPADDR=192.168.56.101
NETMASK=255.255.255.0
NAT NIC enp0s8
HWADDR=08:00:27:49:5A:6C
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s8
UUID=56cc4f81-d7a4-465a-badb-0b6120a0d62e
ONBOOT=yes
With the above values it works the way I need it:
ssh from host to guest
Internet access from guest
DB access from host to guest
| Enable ssh host to guest & guest Internet on CentOS 7 guest with VirtualBox |
1,598,462,922,000 |
According to tcpdump, the initial packet from the VPN client gets its source address translated and sent to the destination and the response packet arrives, but this response packet is just lost. I even did firewall-cmd --set-log-denied=all, but this very packet was lost without any log message.
Previously I had my OpenVPN server on CentOS7 without firewalld and enabled internet access for clients like this:
# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
# localhost:~ # iptables -t nat -L POSTROUTING -n -v
Chain POSTROUTING (policy ACCEPT 10 packets, 751 bytes)
pkts bytes target prot opt in out source destination
3 180 MASQUERADE all -- * eth0 10.8.1.0/24 0.0.0.0/0
After migrating to OpenSUSE Tumbleweed I spent 4 hours trying to configure the same using firewalld, but gave up, stopped firewalld and tried to use the same iptables commands, but it still doesn't work - the response packet is silently discarded.
10.8.1.1 tun0 # VPN server
172.31.1.100 eth0 # WAN
_
localhost:~ # systemctl stop firewalld
localhost:~ # nft list ruleset
localhost:~ # iptables -t nat -I POSTROUTING -s 10.8.1.0/24 -o eth0 -j MASQUERADE
localhost:~ # nft list ruleset
localhost:~ # iptables-save
# Generated by iptables-save v1.8.7 on Fri Oct 15 02:39:41 2021
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.8.1.0/24 -o eth0 -j MASQUERADE
COMMIT
# Completed on Fri Oct 15 02:39:41 2021
# Generated by iptables-save v1.8.7 on Fri Oct 15 02:39:41 2021
*mangle
:PREROUTING ACCEPT [8078:12476730]
:INPUT ACCEPT [7999:12471990]
:FORWARD ACCEPT [29:1740]
:OUTPUT ACCEPT [7524:1618476]
:POSTROUTING ACCEPT [7553:1620216]
COMMIT
# Completed on Fri Oct 15 02:39:41 2021
# Generated by iptables-save v1.8.7 on Fri Oct 15 02:39:41 2021
*raw
:PREROUTING ACCEPT [8078:12476730]
:OUTPUT ACCEPT [7524:1618476]
COMMIT
# Completed on Fri Oct 15 02:39:41 2021
# Generated by iptables-save v1.8.7 on Fri Oct 15 02:39:41 2021
*security
:INPUT ACCEPT [7999:12471990]
:FORWARD ACCEPT [29:1740]
:OUTPUT ACCEPT [7524:1618476]
COMMIT
# Completed on Fri Oct 15 02:39:41 2021
# Generated by iptables-save v1.8.7 on Fri Oct 15 02:39:41 2021
*filter
:INPUT ACCEPT [7999:12471990]
:FORWARD ACCEPT [29:1740]
:OUTPUT ACCEPT [7524:1618476]
COMMIT
# Completed on Fri Oct 15 02:39:41 2021
The client trying to connect to SMTP
localhost:~ # tcpdump -nn -i any "port 465 or icmp"
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
02:41:25.326501 tun0 In IP 10.8.1.32.37346 > 173.194.222.16.465: Flags [S], seq 3151810436, win 64240, options [mss 1286,sackOK,TS val 1758001736 ecr 0,nop,wscale 7], length 0
02:41:25.326590 eth0 Out IP 172.31.1.100.37346 > 173.194.222.16.465: Flags [S], seq 3151810436, win 64240, options [mss 1286,sackOK,TS val 1758001736 ecr 0,nop,wscale 7], length 0
02:41:25.363047 eth0 In IP 173.194.222.16.465 > 172.31.1.100.37346: Flags [S.], seq 1158840380, ack 3151810437, win 65535, options [mss 1430,sackOK,TS val 4105615202 ecr 1758001736,nop,wscale 8], length 0
02:41:26.280346 tun0 In IP 10.8.1.32.37346 > 173.194.222.16.465: Flags [S], seq 3151810436, win 64240, options [mss 1286,sackOK,TS val 1758002755 ecr 0,nop,wscale 7], length 0
02:41:26.280400 eth0 Out IP 172.31.1.100.37346 > 173.194.222.16.465: Flags [S], seq 3151810436, win 64240, options [mss 1286,sackOK,TS val 1758002755 ecr 0,nop,wscale 7], length 0
02:41:26.316940 eth0 In IP 173.194.222.16.465 > 172.31.1.100.37346: Flags [S.], seq 1158840380, ack 3151810437, win 65535, options [mss 1430,sackOK,TS val 4105616156 ecr 1758001736,nop,wscale 8], length 0
02:41:27.331029 eth0 In IP 173.194.222.16.465 > 172.31.1.100.37346: Flags [S.], seq 1158840380, ack 3151810437, win 65535, options [mss 1430,sackOK,TS val 4105617170 ecr 1758001736,nop,wscale 8], length 0
02:41:28.306349 tun0 In IP 10.8.1.32.37346 > 173.194.222.16.465: Flags [S], seq 3151810436, win 64240, options [mss 1286,sackOK,TS val 1758004782 ecr 0,nop,wscale 7], length 0
02:41:28.306380 eth0 Out IP 172.31.1.100.37346 > 173.194.222.16.465: Flags [S], seq 3151810436, win 64240, options [mss 1286,sackOK,TS val 1758004782 ecr 0,nop,wscale 7], length 0
02:41:28.342862 eth0 In IP 173.194.222.16.465 > 172.31.1.100.37346: Flags [S.], seq 1158840380, ack 3151810437, win 65535, options [mss 1430,sackOK,TS val 4105618182 ecr 1758001736,nop,wscale 8], length 0
02:41:30.403068 eth0 In IP 173.194.222.16.465 > 172.31.1.100.37346: Flags [S.], seq 1158840380, ack 3151810437, win 65535, options [mss 1430,sackOK,TS val 4105620242 ecr 1758001736,nop,wscale 8], length 0
^C
11 packets captured
13 packets received by filter
0 packets dropped by kernel
|
So I decided to reboot, but before rebooting I dumped the runtime kernel parameters to a file; afterwards I repeated the iptables/sysctl setup, and this time it worked!
After comparing the sysctl output I see that net.ipv4.conf.eth0.forwarding was 0 even though net.ipv4.ip_forward was 1. I didn't know that forwarding could be enabled or disabled for a single network card. It looks like playing with firewall-cmd set the wrong value for the runtime kernel parameter, and firewall-cmd was unable to revert it for some reason.
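Since the lesson here is that net.ipv4.ip_forward being 1 does not guarantee every interface has forwarding enabled, a small Python sketch can enumerate the per-interface flags under /proc/sys/net/ipv4/conf. The base path is a parameter so the demo below runs against a fake tree; on a real Linux system you would call it with the default path:

```python
import os
import tempfile

def forwarding_flags(base="/proc/sys/net/ipv4/conf"):
    """Return {interface: True/False} for the per-interface IPv4
    forwarding flag found under `base`."""
    flags = {}
    for iface in sorted(os.listdir(base)):
        path = os.path.join(base, iface, "forwarding")
        try:
            with open(path) as f:
                flags[iface] = f.read().strip() == "1"
        except OSError:
            pass  # entry without a forwarding knob; skip it
    return flags

# Self-contained demo against a fake sysctl tree (hypothetical values,
# mimicking the situation in the answer: ip_forward/all on, eth0 off):
demo = tempfile.mkdtemp()
for iface, value in [("all", "1"), ("eth0", "0"), ("tun0", "1")]:
    os.makedirs(os.path.join(demo, iface))
    with open(os.path.join(demo, iface, "forwarding"), "w") as f:
        f.write(value + "\n")

assert forwarding_flags(demo) == {"all": True, "eth0": False, "tun0": True}
```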
| MASQUERADE doesn't work - the response packets are lost |
1,598,462,922,000 |
We run a FreePBX server on our LAN and softphones can register using the local SIP server IP.
I need these softphones to be able to register over the internet too so we have configured the firewall and created a dns entry for sip.ourdomain.com.
When the softphones are configured to use sip.ourdomain.com they can register over the internet fine; however, when they are in the office and connected to the WiFi, they are unable to register.
I suspect this is because when in the office they are trying to register to sip.ourdomain.com which resolves to the public IP that redirects to the sip server on the local LAN.
How can this be resolved?
Edit1
LAN is 192.168.1.X/24 & SIP Server is 192.168.1.8
|
What you may need is to define a split-view (or multi-view) DNS architecture in your infrastructure.
Thus, on your internal network, your internal DNS server will resolve sip.ourdomain.com to 192.168.1.8, and externally it will resolve to the current public IP address.
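One common way to implement this is bind9 views; a hedged named.conf sketch (the zone file paths and the internal client range are illustrative, not taken from this setup):

```
view "internal" {
    match-clients { 192.168.1.0/24; };
    zone "ourdomain.com" {
        type master;
        file "/etc/bind/db.ourdomain.com.internal";  // sip A record -> 192.168.1.8
    };
};

view "external" {
    match-clients { any; };
    zone "ourdomain.com" {
        type master;
        file "/etc/bind/db.ourdomain.com.external";  // sip A record -> public IP
    };
};
```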
Another alternative is using a public IP address for the SIP server instead of a private IP address.
I usually advise network administrators to use public IP addresses for SIP servers and VPN servers, so as not to have to deal with some corner cases of NAT problems.
| Unable to register SIP via WiFi |
1,598,462,922,000 |
My general question is this: What's the best way (simplest, easiest, quickest, least error-prone, etc.) to verify iptables NAT rules locally on a single host (i.e. without a network connection) at the command-line?
What follows are the details of specific (failed) attempts at checking a simple DNAT rule using NetCat. I am hoping for a resolution of my specific issue in this case, but also for an answer to my general question.
I'm working on a VirtualBox virtual machine running Debian 8 (Jessie). I want to use netcat to perform a basic test of a simple DNAT rule.
For my test, all I want to do is send some data to one local address (e.g. 192.168.0.1) and have it arrive at another local address (e.g. 192.168.0.2).
I've tried several different approaches so far:
Dummy interfaces and the PREROUTING chain
Virtual interfaces and the PREROUTING chain
Using the OUTPUT chain instead of PREROUTING
Dummy interfaces and the PREROUTING chain
My first attempt was to add a DNAT rule to the PREROUTING chain and add two dummy interfaces with the appropriate addresses.
Here is my rule:
sudo iptables \
-t nat \
-A PREROUTING \
-d 192.168.0.1 \
-j DNAT --to-destination 192.168.0.2
There are no other netfilter rules in my firewall. But just to be sure, here is the output from iptables-save:
# Generated by iptables-save v1.4.21
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -d 192.168.0.1/32 -j DNAT --to-destination 192.168.0.2
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
To reiterate, all I want to do is send some data to the 192.168.0.1 address and have it arrive at the 192.168.0.2 address.
It's probably worth mentioning that the 192.168.0.0/24 subnetwork is unused on my VM. First I add a couple of dummy interfaces:
sudo ip link add dummy1 type dummy
sudo ip link add dummy2 type dummy
Next I assign the IP addresses to the dummy interfaces on the desired subnetwork range:
sudo ip addr add 192.168.0.1/24 dev dummy1
sudo ip addr add 192.168.0.2/24 dev dummy2
And then I bring the interfaces up:
sudo ip link set dummy1 up
sudo ip link set dummy2 up
Here is what my routing table looks like now:
default via 10.0.2.2 dev eth0
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
192.168.0.0/24 dev dummy1 proto kernel scope link src 192.168.0.1
192.168.0.0/24 dev dummy2 proto kernel scope link src 192.168.0.2
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.100
Now I listen at the first (source) address using netcat:
nc -l -p 1234 -s 192.168.0.1
And I connect to the netcat server with a netcat client (in a separate terminal window):
nc 192.168.0.1 1234
Text entered in one window appears in the other - just as expected.
I do the same thing with the second address as well:
nc -l -p 1234 -s 192.168.0.2
nc 192.168.0.2 1234
Again, text entered in one window appears in the other - as expected.
Finally, I try to listen on the target (DNAT) address and connect via the source (DNAT) address:
nc -l -p 1234 -s 192.168.0.2
nc 192.168.0.1 1234
Unfortunately the connection fails with the following error:
(UNKNOWN) [192.168.0.1] 1234 (?) : Connection refused
I also tried using ping -c 1 -R 192.168.0.1 to see if the DNAT was taking effect, but it does not look like that's the case:
PING 192.168.0.1 (192.168.0.1) 56(124) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.047 ms
RR: 192.168.0.1
192.168.0.1
192.168.0.1
192.168.0.1
--- 192.168.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
Why isn't this working? What am I doing wrong?
Diagnosis with tcpdump
To diagnose this issue, I tried using tcpdump to listen for traffic on the dummy interfaces. I tried listening on all interfaces (and filtering out SSH and DNS):
sudo tcpdump -i any -e port not 22 and port not 53
Then I pinged the dummy1 interface:
ping -n -c 1 -I dummy1 192.168.0.1
This yielded the following results:
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
In 00:00:00:00:00:00 (oui Ethernet) ethertype IPv4 (0x0800), length 100: 192.168.0.1 > 192.168.0.1: ICMP echo request, id 8071, seq 1, length 64
In 00:00:00:00:00:00 (oui Ethernet) ethertype IPv4 (0x0800), length 100: 192.168.0.1 > 192.168.0.1: ICMP echo reply, id 8071, seq 1, length 64
So it looks like the dummy interfaces are attached to the loopback interface. This might mean that the iptables rules are being totally circumvented.
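One way to confirm this (a diagnostic sketch, assuming the dummy setup above is in place) is to ask the routing stack which route it would choose for the address:

```shell
# For a locally assigned address the kernel returns a "local" route,
# i.e. loopback delivery, which bypasses the nat PREROUTING chain entirely.
ip route get 192.168.0.1
# Locally assigned addresses also appear in the "local" routing table:
ip route show table local | grep 192.168.0
```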
Virtual interfaces and the PREROUTING chain
As a second attempt, I tried using so-called virtual IP addresses instead of dummy interfaces.
Here is how I added the "virtual" IP addresses to the eth0 and eth1 interfaces:
sudo ip addr add 192.168.0.100/24 dev eth0
sudo ip addr add 192.168.0.101/24 dev eth1
NOTE: I used different IP addresses for these than I did for the dummy interface.
Then I flushed and updated the iptables NAT rules:
sudo iptables -F -t nat
sudo iptables \
-t nat \
-A PREROUTING \
-d 192.168.0.100 \
-j DNAT --to-destination 192.168.0.101
Then I retried the ping test:
ping -n -c 1 -R 192.168.0.100
No dice:
PING 192.168.0.100 (192.168.0.100) 56(124) bytes of data.
64 bytes from 192.168.0.100: icmp_seq=1 ttl=64 time=0.023 ms
RR: 192.168.0.100
192.168.0.100
192.168.0.100
192.168.0.100
--- 192.168.0.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
Then the netcat test again. Start the server:
nc -l -p 1234 -s 192.168.0.101
Try to connect the client:
nc 192.168.0.100 1234
Also no dice:
(UNKNOWN) [192.168.0.100] 1234 (?) : Connection refused
Using the OUTPUT chain instead of PREROUTING
Then I tried moving both DNAT rules from the PREROUTING chain to the OUTPUT chain:
sudo iptables -F -t nat
sudo iptables \
-t nat \
-A OUTPUT \
-d 192.168.0.1 \
-j DNAT --to-destination 192.168.0.2
sudo iptables \
-t nat \
-A OUTPUT \
-d 192.168.0.100 \
-j DNAT --to-destination 192.168.0.101
Now I try ping on both the dummy and virtual interfaces:
user@host:~$ ping -c 1 -R 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(124) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.061 ms
RR: 192.168.0.1
192.168.0.2
192.168.0.2
192.168.0.1
--- 192.168.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
user@host:~$ ping -c 1 -R 192.168.0.100
PING 192.168.0.100 (192.168.0.100) 56(124) bytes of data.
64 bytes from 192.168.0.100: icmp_seq=1 ttl=64 time=0.058 ms
RR: 192.168.0.100
192.168.0.101
192.168.0.101
192.168.0.100
--- 192.168.0.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
And I also try the netcat client-server test for each pair of IP addresses:
nc -l -p 1234 -s 192.168.0.2
nc 192.168.0.1 1234
and:
nc -l -p 1234 -s 192.168.0.101
nc 192.168.0.100 1234
This test succeeds as well.
So it looks like both the dummy and virtual interfaces work when the DNAT rule is in the OUTPUT chain instead of the PREROUTING chain.
It seems that part of my problem is that I'm unclear on which packets traverse which chains.
|
Short Explanation: The dummy interfaces and virtual IP addresses send packets through the loopback interface, which isn't affected by the PREROUTING chain. By using network namespaces with veth interfaces we can send traffic from one IP address to another in a way that more accurately models multi-host network traffic and allows us to test the DNAT rule on the PREROUTING chain, as desired.
A more detailed description of the solution follows.
Here is a Bash script that configures a pair of network interfaces and tests that the DNAT rule is functioning as expected:
# Create a network namespace to represent a client
sudo ip netns add 'client'
# Create a network namespace to represent a server
sudo ip netns add 'server'
# Create a veth virtual-interface pair
sudo ip link add 'client-eth0' type veth peer name 'server-eth0'
# Assign the interfaces to the namespaces
sudo ip link set 'client-eth0' netns 'client'
sudo ip link set 'server-eth0' netns 'server'
# Change the names of the interfaces (I prefer to use standard interface names)
sudo ip netns exec 'client' ip link set 'client-eth0' name 'eth0'
sudo ip netns exec 'server' ip link set 'server-eth0' name 'eth0'
# Assign an address to each interface
sudo ip netns exec 'client' ip addr add 192.168.1.1/24 dev eth0
sudo ip netns exec 'server' ip addr add 192.168.2.1/24 dev eth0
# Bring up the interfaces (both the veth interfaces and the loopback interfaces)
sudo ip netns exec 'client' ip link set 'lo' up
sudo ip netns exec 'client' ip link set 'eth0' up
sudo ip netns exec 'server' ip link set 'lo' up
sudo ip netns exec 'server' ip link set 'eth0' up
# Configure routes
sudo ip netns exec 'client' ip route add default via 192.168.1.1 dev eth0
sudo ip netns exec 'server' ip route add default via 192.168.2.1 dev eth0
# Test the connection (in both directions)
sudo ip netns exec 'client' ping -c 1 192.168.2.1
sudo ip netns exec 'server' ping -c 1 192.168.1.1
# Add a DNAT rule to the server namespace
sudo ip netns exec 'server' \
iptables \
-t nat \
-A PREROUTING \
-d 192.168.2.1 \
-j DNAT --to-destination 192.168.2.2
# Add a dummy interface to the server (we need a target for the destination address)
sudo ip netns exec 'server' ip link add dummy type dummy
sudo ip netns exec 'server' ip addr add 192.168.2.2/24 dev dummy
sudo ip netns exec 'server' ip link set 'dummy' up
# Test the DNAT rule using ping
sudo ip netns exec 'client' ping -c 1 -R 192.168.2.1
The output of the ping test shows that the rule is working:
PING 192.168.2.1 (192.168.2.1) 56(124) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.025 ms
RR: 192.168.1.1
192.168.2.2
192.168.2.2
192.168.1.1
--- 192.168.2.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
Now I can also perform my NetCat test. First I listen on the server:
sudo ip netns exec 'server' nc -l -p 1234 -s 192.168.2.2
And then I connect via the client (in a separate terminal window):
sudo ip netns exec 'client' nc 192.168.2.1 1234
Text entered in one terminal window appears in the other - success!
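When finished, the test environment can be torn down by simply deleting the namespaces (the veth pair and the dummy interface go away with them):

```shell
# Cleanup: deleting a namespace removes the interfaces moved into it,
# along with its routes and its iptables rules.
sudo ip netns del 'client'
sudo ip netns del 'server'
```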
| Testing iptables DNAT Rule Locally Using NetCat |
1,598,462,922,000 |
I have a Bind9 on a host.
I have several guest virtual machines.
I want my virtual machines to use the Bind9 located on the host.
I know how to make Bind9 accept requests from my vitual machines (listen-on + allow-recursion).
I want to achieve it using iptables/netfilter, without modifying the Bind9 configuration (i.e. it keeps listening only on 127.0.0.1).
--> this is just a local port redirection. I know how to do it with socat, but I'm stuck when doing it with iptables/netfilter
Bind listen only on 127.0.0.1, so the packets must originate from 127.0.0.1
The virtual machines are on a bridge vmbr0 10.10.10.0/24
The host is also on the bridge at 10.10.10.1
Should I make the packets enter a custom chain, then DNAT+SNAT them, or is there a simpler way?
I did that (but does not work):
sysctl -w net.ipv4.conf.vmbr0.route_localnet=1 # not sure if necessary. Let's see that when everything will work
iptables --table nat --new-chain dns-prerouting
iptables --table nat --append PREROUTING --source 10.10.10.0/24 --destination 10.10.10.1 --protocol udp --destination-port 53 --jump dns-prerouting
iptables --table nat --append PREROUTING --source 10.10.10.0/24 --destination 10.10.10.1 --protocol tcp --destination-port 53 --jump dns-prerouting
iptables --table nat --new-chain dns-postrouting
iptables --table nat --append POSTROUTING --source 10.10.10.0/24 --destination 127.0.0.1 --protocol udp --destination-port 53 --jump dns-postrouting
iptables --table nat --append POSTROUTING --source 10.10.10.0/24 --destination 127.0.0.1 --protocol tcp --destination-port 53 --jump dns-postrouting
iptables --table nat --append dns-prerouting --jump DNAT --to-destination 127.0.0.1
iptables --table nat --append dns-postrouting --jump SNAT --to-source 127.0.0.1
|
You have to use sysctl -w net.ipv4.conf.XXX.route_localnet=1 as you did, but probably on the virtual Ethernet interface.
This allows the kernel to route such "martian" packets (127.0.0.0/8 seen on a non-loopback interface) instead of dropping them.
Also keep in mind that locally generated packets do not pass through the PREROUTING chain, so you have to use the OUTPUT chain.
And finally don't try to NAT for this very special case. Use --jump TPROXY instead.
I can't give you a working example from memory; you have to find the exact setup. Then please complete the answer for future reference.
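For future readers, here is the canonical TPROXY sketch adapted from the kernel documentation (Documentation/networking/tproxy.rst) — untested for this particular setup, with the bridge name vmbr0 and port 53 filled in as assumptions. Note that TPROXY only delivers packets to a socket that has the IP_TRANSPARENT option set, which a stock Bind9 listening plainly on 127.0.0.1 does not, so extra work on the daemon side may still be needed:

```shell
# Divert packets that belong to a local transparent socket (sketch):
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
# Deliver marked packets to the local stack via a dedicated table:
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
# Redirect DNS from the VM bridge to the local resolver port:
iptables -t mangle -A PREROUTING -i vmbr0 -p tcp --dport 53 \
    -j TPROXY --tproxy-mark 0x1/0x1 --on-port 53
```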
| Common DNS for virtual machines - with iptables/netfilter |
1,598,462,922,000 |
I am trying to use DNAT on a new custom Linux target, but I get an error with the following basic command:
#iptables -t nat -A PREROUTING -d 10.110.0.250 -p tcp --dport 9090 -j DNAT --to 10.110.0.239:80
iptables: No chain/target/match by that name.
I think all modules are correctly loaded:
# lsmod | grep ip
ipt_MASQUERADE 1686 1 - Live 0xbf15c000
iptable_nat 2396 1 - Live 0xbf150000
nf_conntrack_ipv4 11354 1 - Live 0xbf149000
nf_defrag_ipv4 1331 1 nf_conntrack_ipv4, Live 0xbf145000
nf_nat_ipv4 3401 1 iptable_nat, Live 0xbf141000
nf_nat 13364 4 ipt_MASQUERADE,xt_nat,iptable_nat,nf_nat_ipv4, Live 0xbf138000
nf_conntrack 72079 6 ipt_MASQUERADE,xt_conntrack,iptable_nat,nf_conntrack_ipv4,nf_nat_ipv4,nf_nat, Live 0xbf11b000
ip_tables 10836 1 iptable_nat, Live 0xbf114000
x_tables 16429 4 ipt_MASQUERADE,xt_conntrack,xt_nat,ip_tables, Live 0xbf10a000
The forwarding is active:
# cat /proc/sys/net/ipv4/ip_forward
1
strace doesn't give me any clue about the problem:
# ...
socket(PF_LOCAL, SOCK_STREAM, 0) = 3
bind(3, {sa_family=AF_LOCAL, sun_path=@"xtables"}, 10) = 0
socket(PF_INET, SOCK_RAW, IPPROTO_RAW) = 4
fcntl64(4, F_SETFD, FD_CLOEXEC) = 0
getsockopt(4, SOL_IP, 0x40 /* IP_??? */, "nat\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., [84]) = 0
getsockopt(4, SOL_IP, 0x41 /* IP_??? */, "nat\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., [992]) = 0
setsockopt(4, SOL_IP, 0x40 /* IP_??? */, "nat\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1264) = -1 ENOENT (No such file or directory)
close(4) = 0
write(2, "iptables: No chain/target/match "..., 46iptables: No chain/target/match by that name.
) = 46
exit_group(1) = ?
+++ exited with 1 +++
What is going wrong?
[EDIT]
I found that if I remove the destination port the command is working
iptables -t nat -A PREROUTING -d 10.110.0.250 -p tcp -j DNAT --to 10.110.0.239:80
[/EDIT]
Thanks.
|
The problem was a missing module: xt_tcpudp.
Here is the full list of dynamically loaded modules for my command:
xt_nat 1527 1 - Live 0xbf12f000
xt_tcpudp 1961 1 - Live 0xbf12b000
iptable_nat 2396 1 - Live 0xbf127000
nf_conntrack_ipv4 11354 1 - Live 0xbf120000
nf_defrag_ipv4 1331 1 nf_conntrack_ipv4, Live 0xbf11c000
nf_nat_ipv4 3401 1 iptable_nat, Live 0xbf118000
nf_nat 13364 3 xt_nat,iptable_nat,nf_nat_ipv4, Live 0xbf10f000
nf_conntrack 72079 4 iptable_nat,nf_conntrack_ipv4,nf_nat_ipv4,nf_nat, Live 0xbf0f2000
ip_tables 10836 1 iptable_nat, Live 0xbf0eb000
x_tables 16429 3 xt_nat,xt_tcpudp,ip_tables, Live 0xbf0e1000
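If the module is present on disk but simply not loaded, it can also be loaded by hand before retrying the rule (this assumes the module was actually built for the running kernel):

```shell
# Load the TCP/UDP port-match module and verify it appears:
modprobe xt_tcpudp
lsmod | grep xt_tcpudp
# Then the original rule with --dport should work:
iptables -t nat -A PREROUTING -d 10.110.0.250 -p tcp --dport 9090 \
    -j DNAT --to 10.110.0.239:80
```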
| iptables DNAT: 'No chain/target/match by that name' |
1,598,462,922,000 |
I'd like to rewrite the source IP on TCP/514 traffic leaving a redhat machine, for connections that weren't initiated from the machine.
The machine receives TCP/514 traffic on an interface, for example 10.10.0.20, and then I'd like to return the traffic as though the reply is from 10.10.0.15 (which isn't assigned to the machine).
If I was initiating the connection, then I could use the nat table, and:
iptables -A POSTROUTING -t nat -p tcp --sport 514 -j SNAT --to=10.10.0.15
..but since I'm replying to incoming traffic, I can't make it hit the nat table (as far as I can tell). Ignoring the reasons why I need to do things this way, how can I make this work?
More background:
It's a redhat 7 machine sitting behind a Netscaler VIP which receives
syslog traffic over TCP (not UDP). I'm using client IP passthrough on
the VIP. Due to the firewall seeing return traffic coming from the
syslog server IP, not the VIP's IP, the firewall is dropping the
traffic, and hence I'd like to rewrite TCP replies from the syslog
server so they come from the VIP's IP address. Since the traffic
doesn't originate from the backend server, I don't seem to be able to
use the nat table (and therefore no -j SNAT).
What I see now is:
13:13:45.439683 IP 10.10.0.8.31854 > 10.10.0.20.514: Flags [S], seq 544116376, win 8190, options [mss 1460], length 0
13:13:45.439743 IP 10.10.0.20.514 > 10.10.0.8.31854: Flags [S.], seq 4163333198, ack 544116377, win 14600, options [mss 1460], length 0
What I want to see is:
13:13:45.439683 IP 10.10.0.8.31854 > 10.10.0.20.514: Flags [S], seq 544116376, win 8190, options [mss 1460], length 0
13:13:45.439743 IP 10.10.0.15.514 > 10.10.0.8.31854: Flags [S.], seq 4163333198, ack 544116377, win 14600, options [mss 1460], length 0
|
DSR method
The most efficient way would be to configure Direct Server Return mode correctly on Netscaler, where Netscaler does MAC based forwarding to the syslog server with destination VIP address unchanged (10.10.0.15).
The syslog server also needs to have that VIP address in order to receive packets forwarded from Netscaler. The address can be assigned to any internal interface like lo or dummy0.
ip addr add 10.10.0.15/32 dev lo
And you have to set some sysctls on the incoming interface (here I assume eth0) to avoid problems with ARP for VIP (see 6.7. The Cure: 2.6.x kernels - arp_ignore/arp_announce). Add the following in /etc/sysctl.conf and run sysctl -p.
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
Note that it's useless to set arp_ignore / arp_announce on lo.
DNAT method
If Netscaler does DNAT on the incoming traffic, then the syslog server should definitely forward the return traffic to Netscaler as well, so that it can free its connection tracking resources. That's the most natural way to complete the address translation.
In this case you might want to utilize policy routing on the syslog server. With this you can apply a special routing table to packets matching a specific condition, like "outgoing TCP packets from port 514".
There are HOWTO docs on Linux advanced routing like these. I recommend looking through the latter mini HOWTO to understand the following instructions.
Linux Advanced Routing & Traffic Control
Linux Advanced Routing Mini HOWTO
First, define the special routing table named VIP with any ID in /etc/iproute2/rt_tables:
1 VIP
Add a default route to VIP (10.10.0.15) to this VIP table:
ip route add default via 10.10.0.15 table VIP
Add an entry to iptables mangle table to mark 1 on outgoing TCP packets from port 514:
iptables -t mangle -A OUTPUT -p tcp --sport 514 -j MARK --set-mark 1
Add a rule to look up VIP routing table on packets with mark 1:
ip rule add from all fwmark 1 table VIP
You can see rules defined so far by ip rule list. Rules are processed in ascending order of priority value (0 is highest precedence).
# ip rule list
0: from all lookup local
32765: from all fwmark 0x1 lookup VIP
32766: from all lookup main
32767: from all lookup default
You can check the content of each routing table like this:
# ip route ls table local
# ip route ls table VIP
# ip route ls table main
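To verify that the rewrite is actually taking effect, you can watch the return traffic on the interface facing Netscaler (the interface name eth0 here is an assumption — substitute your own):

```shell
# Replies to port-514 clients should now leave with the VIP 10.10.0.15
# as their source address:
tcpdump -ni eth0 'tcp and src port 514'
```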
| Rewrite source IP in TCP replies using iptables |
1,598,462,922,000 |
I have a Linux box with 3 network adapters, which I'd like to configure as follows:
Adapter A is connected to computer A
Adapter B is connected to computer B
Adapter C is connected to the Internet. Specifically, to someserver.com
What I want to acheive:
All traffic from A will go to the Internet
Define a special "secret port" on Adapter B
TCP traffic coming from Computer B going to 'someserver.com' on 'secret port' will masquerade its source IP to appear as it is coming from Computer A
TCP traffic coming back from 'someserver.com' going to Computer A to the same port used in #3, will redirect to computer B.
Do I need to implement a router or a bridge? Can I do it merely by configuring Netfilter/iptables, or should I implement some code? If code, at which layer do I integrate with the IP stack?
|
For general access, you'll have to use MASQUERADE / SNAT (depending on whether your IP address on C is dynamic or static).
So let's say the current situation is: your computer A has static IP address a.a.a.a, and your computer B has static IP address b.b.b.b. Both have their default gateway set to computer C. And someserver.com has static IP address r.r.r.r and the secret port is pppp.
You would configure computer C as a router, with its default route out of the Internet interface C (it already does that, either via static configuration, or by being dynamically set up via PPPoE, etc.). That by itself will accomplish 1.
Now, you have two possibilities:
make computer A a router too.
Then you change computer B's config so its default route is via computer A (and not computer C as before), and configure computer A like this:
iptables -t nat -A POSTROUTING -s b.b.b.b -d r.r.r.r -p tcp --dport pppp -j SNAT --to a.a.a.a
that would make all TCP packets from source IP b.b.b.b going to destination IP r.r.r.r and destination port pppp pretend like they're coming from a.a.a.a, thus accomplishing 3; and traffic from someserver.com will go back to what was the source address (a.a.a.a), which would be decoded by computer A and sent back to computer B (thus accomplishing 4).
That is easier, but requires that computer A is running an OS capable of such NAT policies.
change computer A to have private IP like 10.0.1.100/24 and computer B to have private IP 10.0.2.100/24. Then on computer C do:
ip addr add a.a.a.a/nn dev ifaceC
ip addr add b.b.b.b/nn dev ifaceC
iptables -t nat -A POSTROUTING -s 10.0.1.100 -j SNAT --to a.a.a.a
iptables -t nat -A POSTROUTING -s 10.0.2.100 -d r.r.r.r -p tcp --dport pppp -j SNAT --to a.a.a.a
iptables -t nat -A POSTROUTING -s 10.0.2.100 -j SNAT --to b.b.b.b
where nn is your netmask and ifaceC is the name of your interface C. That would put computer A and computer B in private ranges, thus allowing computer C to NAT computer A to a.a.a.a (so it behaves like before), and NAT computer B either to a.a.a.a (if the dst=r.r.r.r, dport=pppp condition is met) or to b.b.b.b (otherwise).
This does not require any special support on computer A or computer B, but puts them behind NAT, which might affect some other things.
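In both variants, don't forget that the machine(s) doing the routing must have IPv4 forwarding enabled, or no packets will be forwarded at all (a small sketch; the sysctl names are standard):

```shell
# Enable forwarding now...
sysctl -w net.ipv4.ip_forward=1
# ...and persist it across reboots:
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
```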
And of course, in this age it should be mentioned that the above will only work for good ol' IPv4 addresses (the last of which are rapidly being used up) and not for IPv6
| Bridge/Router with custom logic |
1,598,462,922,000 |
I have a machine that serves both as a router and a server. I have several lxc containers on this machine, and want to expose them to both the LAN and WAN. Following https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/sec-configuring_port_forwarding_using_nftables I was able to successfully access the servers from both WAN and LAN machines, but not localhost/the router-server itself!
Here is the configuration that partially works:
# Created from lxc-net in debian
table inet lxc {
chain input {
type filter hook input priority filter; policy accept;
iifname "lxcbr0" udp dport { 53, 67 } accept
iifname "lxcbr0" tcp dport { 53, 67 } accept
}
chain forward {
type filter hook forward priority filter; policy accept;
iifname "lxcbr0" accept
oifname "lxcbr0" accept
}
}
# Created from lxc-net in debian
table ip lxc {
chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
ip saddr 10.0.3.0/24 ip daddr != 10.0.3.0/24 counter packets 51 bytes 3745 masquerade
}
}
# This is what I added
table ip myportforwarding {
chain prerouting {
type nat hook prerouting priority dstnat; policy accept;
tcp dport 8088 dnat to 10.0.3.230
}
chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
ip daddr 10.0.3.230 masquerade
}
}
I tried several options from this answer: How to configure port forwarding with nftables for a Minecraft server on Raspberry Pi?
Nothing seemed to work to enable local access to the services on 8088.
Looking at wireshark, access from LAN looks like:
192.168.1.105 -> 192.168.1.1 SYN
10.0.3.1 -> 10.0.3.230 SYN
...
Access from the same machine:
192.168.1.1 -> 192.168.1.1 SYN
192.168.1.1 <- 192.168.1.1 FIN!
I'm not too familiar with nft or iptables, so I'm sure there is something I'm missing
|
Let's look at a part of the Packet flow in Netfilter and General Networking schematic. It was made for iptables but most of it applies for nftables:
It's documented that the nat table is consulted only for packets in conntrack state NEW: packets starting a new flow.
Routed/forwarded traffic arrives from the nat/prerouting hook: that's where new flows will have a chance to be NAT-ed. OP handled this case.
Locally initiated packets (created at the local process bubble in the center) first traverse the nat/output hook; their answers then come back as usual through the nat/prerouting hook. But by then the query's destination has already been left unchanged, and since the answer matches the flow created before, it's no longer a packet in NEW state: the nat/prerouting hook will never be consulted for such traffic because it's too late. The only place to do the NAT was nat/output.
So for this case where both routed and locally initiated packets should receive the same alteration, rules in nat/prerouting have to be duplicated in nat/output and usually slightly adapted to match the different case.
The adaptation here is about the host reaching itself: in that routing case the output interface is the loopback (lo) interface, hence adding oif lo to the rule. Without this filter, any query from the host to anywhere on port 8088 would be redirected to the container, while only queries from the host to itself are intended.
Adding this chain in the already existing ip myportforwarding table will handle it:
chain output {
type nat hook output priority dstnat; policy accept;
tcp dport 8088 oif "lo" dnat to 10.0.3.230
}
For the little details: a change from nat/output triggers the reroute check part, where the routing stack is told to reconsider the previous routing decision (output interface lo). After reroute check the output interface becomes lxcbr0.
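With the extra chain loaded, a quick way to check that the local path now works (the address and port are the ones from the question's capture):

```shell
# List the table with rule handles to confirm the output chain is present:
nft -a list table ip myportforwarding
# A request from the router itself should now reach the container:
curl -s http://192.168.1.1:8088/ >/dev/null && echo 'local DNAT OK'
```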
| nft port forwarding not working on router |
1,598,462,922,000 |
I'm running Gentoo Linux, so I'm using plain iptables to manage my firewall and networking.
I usually use wan0 for all my traffic, but since I already have a web server behind it, I would like to use wan1 (bound to another domain) for my second web server.
I have three interfaces:
eth0 = LAN
wan0 = Primary used WAN (default gateway)
wan1 = Secondary WAN
Some Infos about the Gateways
> route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0 80.108.x.x 0.0.0.0 UG 0 0 0 wan0
192.168.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
80.108.x.0 0.0.0.0 255.255.254.0 U 0 0 0 wan0
84.114.y.0 0.0.0.0 255.255.255.0 U 0 0 0 wan1
127.0.0.0 127.0.0.1 255.0.0.0 UG 0 0 0 lo
The default init for NAT/MASQUERADING is
sysctl -q -w net.ipv4.conf.all.forwarding=1
iptables -N BLOCK
iptables -A BLOCK -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A BLOCK -m state --state NEW,ESTABLISHED -i eth0 -j ACCEPT
iptables -A BLOCK -m state --state NEW,ESTABLISHED -i lo -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.0.0/16 -j MASQUERADE
iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o wan1 -j MASQUERADE
Behind this gateway I'm running several web servers. On one machine I'm running an HTTP server on port 8000 instead of 80.
lan_host1="192.168.0.200"
iptables -A FORWARD -i wan0 -p TCP -d $lan_host1 --dport 80 -j ACCEPT
iptables -t nat -A PREROUTING -i wan0 -p TCP --dport 8000 -j DNAT --to-destination "$lan_host1":80
iptables -A FORWARD -i wan0 -p UDP -d $lan_host1 --dport 80 -j ACCEPT
iptables -t nat -A PREROUTING -i wan0 -p UDP --dport 8000 -j DNAT --to-destination "$lan_host1":80
That works fine.
Now I would like wan1 to be used, as wan0 is tied to an IP/domain I usually use for something else.
I thought a simple change to wan1 would do it.
lan_host1="192.168.0.200"
iptables -A FORWARD -i wan1 -p TCP -d $lan_host1 --dport 80 -j ACCEPT
iptables -t nat -A PREROUTING -i wan1 -p TCP --dport 8000 -j DNAT --to-destination "$lan_host1":80
iptables -A FORWARD -i wan1 -p UDP -d $lan_host1 --dport 80 -j ACCEPT
iptables -t nat -A PREROUTING -i wan1 -p UDP --dport 8000 -j DNAT --to-destination "$lan_host1":80
But that doesn't work. I guess the issue is that wan0 is the default GW. So I guess packets received by wan1 are forwarded to lan_host1, but when being sent back to the gateway they are sent through wan0 instead of wan1, or at least using the IP from wan0.
Any suggestions how I could manage this?
Thanks in advance,
Rob
|
As the answer is tied to the configuration, I make some assumptions. You'll have to adapt the answer to fit the actual configuration.
wan1's LAN and gateway for wan1 arbitrarily chosen as 84.114.7.0/24 and 84.114.7.254.
no consideration of firewall rules made, but all this shouldn't interact with them.
On Linux, ip link, ip address and ip route should always be used instead of the deprecated ifconfig and route. route probably can't handle additional routing tables anyway.
Just as a reminder, iptables or actually netfilter, doesn't route, but it can by its actions alter routing decisions made by the IP routing stack. This schematic shows where routing decisions can happen. For routed (rather than locally originated) traffic that's only in one place and alterations must happen before: raw/PREROUTING, mangle/PREROUTING or nat/PREROUTING, with raw often impractical, and nat only for limited cases, mostly leaving mangle.
A basic multi-homed system, to use multiple paths to internet, usually requires policy routing, where the route can change not only with the destination as usual, but also with the source or with other selectors (as will be done here) used in policy rules. On Linux additional rules made with ip rule can select a different routing table to select for example a different default route (there will still be only one default route, but one per routing table).
So here the principle, while still keeping active Strict Reverse Path Forwarding (rp_filter), is to accept packets coming from wan1 and route them as usual toward eth0 using an alternate table (which will allow to pass rp_filter). This additional routing table should duplicate the main routing table, but using only routes needed for the alternate path (wan1) and thus not including the usual routes with the "normal" path (wan0). If other routes (such as VPNs etc.) have to be involved in flows going through wan1, chances are that their route too have to be added, or other additional rules and tables have to be created to cope with that.
Since Linux discontinued the use of a routing cache in kernel 3.6, nothing in the routing stack would tell to send back reply packets from host1 to the client through wan1 and they would end up going out using the main default route through wan0, NATed with the wrong IP for this interface (netfilter is route-agnostic and had already chosen the NAT to be done when receiving the first packet of the connection) and probably dropped by the next router of the ISP also doing Strict Reverse Path Filtering. There's a netfilter feature allowing to copy a packet's mark in the conntrack's mark and put it back in the packet: this will act as route memory for the connection. So iptables and netfilter's conntrack will be used for two related features: to mark the packet in order to alter the routing decision, and to restore this mark on the reply packets identified as part of the same connection.
All this translates to these commands:
routing part
Use for marked packets (arbitrary mark value 101) an extra routing table (unrelated arbitrary value also 101) :
ip rule add fwmark 101 lookup 101
Populate the table with entries similar the main routing table, minus wan0 entries:
ip route add table 101 192.168.0.0/16 dev eth0
ip route add table 101 84.114.7.0/24 dev wan1
ip route add table 101 default via 84.114.7.254 dev wan1
iptables/netfilter part
There are various optimizations possible in the following commands, It can probably be improved.
Restore a potential previous mark already saved, so reply packets will get the same mark as original packets:
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
Mark packets arriving from wan1 to alter routing decision above:
iptables -t mangle -A PREROUTING -i wan1 -j MARK --set-mark 101
If there's a mark, save it in conntrack (could have been done in the nat table to do it only once per connection flow rather than for every packet):
iptables -t mangle -A PREROUTING -m mark ! --mark 0 -j CONNMARK --save-mark
Actually this will still fail a Strict Reverse Path Forwarding check, since this undocumented feature was added in 2010. It must be used here:
sysctl -w net.ipv4.conf.wan1.src_valid_mark=1
| iptables: MultiWAN and portforwarding & port redirection |
1,598,462,922,000 |
I have been facing an unusual problem with my VirtualBox NAT settings. I have scoured the Unix SE Forums but was unable to find a similar issue reported. I did find one but it was from 2009 and related to Corporate Proxy.
Details :
Windows 7 Host running VirtualBox 5.1.0
Multiple Guest OS - Ubuntu, Fedora, CentOS (All Fresh Installations)
Home Network, No Perimeter Firewall
Using NAT (Intel PRO/1000 MT Desktop Adapter)
From the Guest I am able to ping external FQDNs, which means DNS and ping are working
Issue : Unable to Browse any Website
Some time ago I had played a little with the 'VBoxManage modifyvm' settings in order to solve a bridging-related issue. I think I may have messed up something which is causing my new issue.
I have tried to re-install VirtualBox but it looks like the previous settings are getting saved somewhere which I am unable to remove and unable to 'Reset to Default' the VBox settings.
Troubleshooting Done :
Changed Adapter to PCnet Fast3
Tried Changing IPs, DNS
IP : 192.168.10.15
Default Gateway: 192.168.10.2
DNS: 192.168.10.3
tcpdump Captures :
While Pinging to Yahoo.com (Getting Replies) :
root@localhost anish]# tcpdump -i enp0s3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 65535 bytes
19:28:20.650985 IP 192.168.10.15.45804 > 192.168.10.3.domain: 1534+ A? yahoo.com. (27)
19:28:20.653043 IP 192.168.10.15.35280 > 192.168.10.3.domain: 47054+ PTR? 3.10.168.192.in-addr.arpa. (43)
19:28:20.661250 IP 192.168.10.3.domain > 192.168.10.15.45804: 1534 3/0/0 A 98.138.253.109, A 98.139.183.24, A 206.190.36.45 (75)
19:28:20.661833 IP 192.168.10.15 > ir1.fp.vip.ne1.yahoo.com: ICMP echo request, id 3431, seq 1, length 64
19:28:20.942937 IP ir1.fp.vip.ne1.yahoo.com > 192.168.10.15: ICMP echo reply, id 3431, seq 1, length 64
When trying to Browse through Firefox (Gateway sends Net Unreachable ICMP) :
19:29:31.562448 IP 192.168.10.15.38893 > 192.168.10.3.domain: 32749+ A? www.google.com. (32)
19:29:31.562562 IP 192.168.10.15.38893 > 192.168.10.3.domain: 48885+ AAAA? www.google.com. (32)
19:29:31.670159 IP 192.168.10.15.52571 > 192.168.10.3.domain: 60173+ A? www.google.com. (32)
19:29:31.670261 IP 192.168.10.15.52571 > 192.168.10.3.domain: 14907+ AAAA? www.google.com. (32)
19:29:35.937594 IP 192.168.10.3.domain > 192.168.10.15.55703: 53252 NXDomain 0/0/0 (43)
19:29:35.937995 IP 192.168.10.3.domain > 192.168.10.15.38893: 32749 1/0/0 A 216.58.196.196 (48)
19:29:35.938025 IP 192.168.10.3.domain > 192.168.10.15.38893: 48885 NotImp 0/0/0 (32)
19:29:35.938371 IP 192.168.10.3.domain > 192.168.10.15.52571: 60173 1/0/0 A 216.58.196.196 (48)
19:29:35.938408 IP 192.168.10.3.domain > 192.168.10.15.52571: 14907 NotImp 0/0/0 (32)
19:29:35.938865 IP 192.168.10.15.33663 > 192.168.10.3.domain: 46127+ PTR? 15.10.168.192.in-addr.arpa. (44)
19:29:35.940003 IP 192.168.10.15.46468 > kul06s14-in-f4.1e100.net.http: Flags [S], seq 4014962253, win 14600, options [mss 1460,sackOK,TS val 1649927 ecr 0,nop,wscale 7], length 0
19:29:35.941228 IP 192.168.10.2 > 192.168.10.15: ICMP net kul06s14-in-f4.1e100.net unreachable, length 36
19:29:35.941377 IP 192.168.10.15.46469 > kul06s14-in-f4.1e100.net.http: Flags [S], seq 613107971, win 14600, options [mss 1460,sackOK,TS val 1649928 ecr 0,nop,wscale 7], length 0
19:29:35.941857 IP 192.168.10.15.46470 > kul06s14-in-f4.1e100.net.http: Flags [S], seq 717756838, win 14600, options [mss 1460,sackOK,TS val 1649929 ecr 0,nop,wscale 7], length 0
19:29:35.942613 IP 192.168.10.2 > 192.168.10.15: ICMP net kul06s14-in-f4.1e100.net unreachable, length 36
Could someone please guide me in resolving this? Alternatively, is there any way to reset VBox to a 'default settings' state?
Any help would be greatly appreciated, as I have only been troubleshooting VBox-related issues for the past couple of days rather than doing any actual work on my guest VMs.
|
I was finally able to resolve this issue. Actually, it's just a temporary fix for now, until I figure out more.
It looks like the VBox and VM settings are fine; it is my Windows host's network configuration causing the issue.
I checked further and found that even though I was getting ICMP and DNS resolution, my TCP traffic was not working.
I found it by trying to :
root@KaliOrc:~# telnet google.com 80
Trying 216.58.199.174...
telnet: Unable to connect to remote host: Network is unreachable
Then on my Windows Host I did a :
netsh winsock reset
Which apparently resolved the issue! I was able to browse and everything tested as working.
root@KaliOrc:~# telnet google.com 80
Trying 216.58.199.174...
Connected to google.com.
Escape character is '^]'.
Now every time I reboot my host, I have to reset Winsock for my VMs to be able to browse.
If anyone can shed some light on the root cause it would be great, because then I will be able to work out a permanent solution.
If anyone has similar problems for which the above 'fix' is not working, please go through the ticket below on the VBox tracker, which has a ton of information:
https://www.virtualbox.org/ticket/13292
| VirtualBox NAT Issue - Able to Ping and Lookup. Unable to Browse |
1,598,462,922,000 |
I'm using a private IP address and I want to keep SNAT entries alive in my router (gateway) for at least two hours (some Windows apps on my network use a TCP keepalive of 2 hours). The gateway is a Linux machine, so I set the nf_conntrack_tcp_timeout_established and nf_conntrack_generic_timeout values to 7400 seconds:
echo 7400 > /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established
echo 7400 > /proc/sys/net/netfilter/nf_conntrack_generic_timeout
Now, when a TCP connection is established, shortly after I can see the new value:
# cat /proc/net/ip_conntrack
tcp 6 7399 ESTABLISHED src=192.168.0.192 dst=108.168.176.194
sport=51826 dport=5222 src=108.168.176.194 dst=95.63.14.117 sport=5222
dport=51826 [ASSURED] use=1
But a few seconds later I read the value again, and now it has returned to 60 seconds:
tcp 6 39 ESTABLISHED src=192.168.0.192 dst=108.168.176.194
sport=51826 dport=5222 src=108.168.176.194 dst=95.63.14.117 sport=5222
dport=51826 [ASSURED] use=1
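For watching that decay, the remaining timeout is the third column of each `/proc/net/ip_conntrack` entry; here is a quick throwaway sketch for pulling it out (illustrative only; the field layout is assumed from the output above):

```python
def conntrack_fields(line):
    """Return (protocol, remaining timeout in seconds, TCP state) from one
    /proc/net/ip_conntrack entry line."""
    parts = line.split()
    # For TCP entries: proto name, proto number, timeout, state, ...
    return parts[0], int(parts[2]), parts[3]

sample = ("tcp 6 7399 ESTABLISHED src=192.168.0.192 dst=108.168.176.194 "
          "sport=51826 dport=5222 src=108.168.176.194 dst=95.63.14.117 "
          "sport=5222 dport=51826 [ASSURED] use=1")

print(conntrack_fields(sample))  # ('tcp', 7399, 'ESTABLISHED')
```

Running this over the file a few seconds apart makes the unexpected drop from 7399 to 60 easy to log.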
|
The root cause was that the conntrack code in the kernel had been modified. As we are using an embedded Linux distribution modified by our provider, the function that refreshes the timeout for the SNAT entry was pointing to a special function used to provide one of our provider's new 'features'. I have fixed it and now it is working as expected.
| Conntrack TCP Timeout for state stablished not working |
1,626,675,809,000 |
I have Debian (kernel 4.19.194-1) as a router server with LAN, WAN and PPPoE (as gateway) interfaces, and COMPUTER1 in the LAN network, which should have access to the internet through the Debian router.
As a firewall I use nftables with these rules:
#!/usr/sbin/nft -f
flush ruleset
define EXTIF = "ppp0"
define LANIF = "enp1s0"
define WANIF = "enp4s0"
define LOCALIF = "lo"
table firewall {
chain input {
type filter hook input priority 0
ct state {established, related} counter accept
ct state invalid counter drop
ip protocol icmp counter accept
ip protocol igmp counter accept comment "Accept IGMP"
ip protocol gre counter accept comment "Accept GRE"
iifname { $LOCALIF, $LANIF } counter accept
tcp dport 44122 counter accept
udp dport 11897 counter accept
udp dport 1194 counter accept
udp dport {67,68} counter accept comment "DHCP"
counter reject
}
chain forwarding {
type filter hook forward priority 0
# teleguide.info for ntf monitor
ip daddr 46.29.166.30 meta nftrace set 1 counter accept
ip saddr 46.29.166.30 meta nftrace set 1 counter accept
udp dport 1194 counter accept
tcp dport 5938 counter accept
udp dport 5938 counter accept
ip daddr 10.10.0.0/24 counter accept
ip saddr 10.10.0.0/24 counter accept
ip protocol gre counter accept comment "Accept GRE Forward"
counter drop comment "all non described Forward drop"
}
chain outgoing {
type filter hook output priority 0
oifname $LOCALIF counter accept
}
}
table nat {
chain prerouting {
type nat hook prerouting priority 0
iifname $EXTIF udp dport 1194 counter dnat to 10.10.0.4
}
chain postrouting {
type nat hook postrouting priority 0
ip saddr 10.10.0.0/24 oifname $EXTIF counter masquerade
}
}
lsmod:
tun 53248 2
pppoe 20480 2
pppox 16384 1 pppoe
ppp_generic 45056 6 pppox,pppoe
slhc 20480 1 ppp_generic
binfmt_misc 20480 1
i915 1736704 0
ppdev 20480 0
evdev 28672 2
video 49152 1 i915
drm_kms_helper 208896 1 i915
iTCO_wdt 16384 0
iTCO_vendor_support 16384 1 iTCO_wdt
parport_pc 32768 0
coretemp 16384 0
sg 36864 0
serio_raw 16384 0
pcspkr 16384 0
drm 495616 3 drm_kms_helper,i915
parport 57344 2 parport_pc,ppdev
i2c_algo_bit 16384 1 i915
rng_core 16384 0
button 20480 0
nft_masq_ipv4 16384 3
nft_masq 16384 1 nft_masq_ipv4
nft_reject_ipv4 16384 1
nf_reject_ipv4 16384 1 nft_reject_ipv4
nft_reject 16384 1 nft_reject_ipv4
nft_counter 16384 25
nft_ct 20480 2
nft_connlimit 16384 0
nf_conncount 20480 1 nft_connlimit
nf_tables_set 32768 3
nft_tunnel 16384 0
nft_chain_nat_ipv4 16384 2
nf_nat_ipv4 16384 2 nft_chain_nat_ipv4,nft_masq_ipv4
nft_nat 16384 1
nf_tables 143360 112 nft_reject_ipv4,nft_ct,nft_nat,nft_chain_nat_ipv4,nft_tunnel,nft_counter,nft_masq,nft_connlimit,nft_masq_ipv4,nf_tables_set,nft_reject
nf_nat 36864 2 nft_nat,nf_nat_ipv4
nfnetlink 16384 1 nf_tables
nf_conntrack 172032 8 nf_nat,nft_ct,nft_nat,nf_nat_ipv4,nft_masq,nf_conncount,nft_connlimit,nft_masq_ipv4
nf_defrag_ipv6 20480 1 nf_conntrack
nf_defrag_ipv4 16384 1 nf_conntrack
ip_tables 28672 0
x_tables 45056 1 ip_tables
autofs4 49152 2
ext4 745472 2
crc16 16384 1 ext4
mbcache 16384 1 ext4
jbd2 122880 1 ext4
fscrypto 32768 1 ext4
ecb 16384 0
crypto_simd 16384 0
cryptd 28672 1 crypto_simd
glue_helper 16384 0
aes_x86_64 20480 1
raid10 57344 0
raid456 172032 0
async_raid6_recov 20480 1 raid456
async_memcpy 16384 2 raid456,async_raid6_recov
async_pq 16384 2 raid456,async_raid6_recov
async_xor 16384 3 async_pq,raid456,async_raid6_recov
async_tx 16384 5 async_pq,async_memcpy,async_xor,raid456,async_raid6_recov
xor 24576 1 async_xor
raid6_pq 122880 3 async_pq,raid456,async_raid6_recov
libcrc32c 16384 3 nf_conntrack,nf_nat,raid456
crc32c_generic 16384 5
raid0 20480 0
multipath 16384 0
linear 16384 0
raid1 45056 2
md_mod 167936 8 raid1,raid10,raid0,linear,raid456,multipath
sd_mod 61440 6
ata_generic 16384 0
ata_piix 36864 4
libata 270336 2 ata_piix,ata_generic
psmouse 172032 0
scsi_mod 249856 3 sd_mod,libata,sg
ehci_pci 16384 0
i2c_i801 28672 0
uhci_hcd 49152 0
lpc_ich 28672 0
ehci_hcd 94208 1 ehci_pci
mfd_core 16384 1 lpc_ich
usbcore 299008 3 ehci_pci,ehci_hcd,uhci_hcd
r8169 90112 0
realtek 20480 2
libphy 77824 2 r8169,realtek
usb_common 16384 1 usbcore
nft monitor trace (verdict accept everywhere):
trace id 2c2a8923 ip firewall forwarding packet: iif "enp1s0" oif "ppp0" ether saddr xxx ether daddr xxx ip saddr 10.10.0.96 ip daddr 46.29.166.30 ip dscp cs0 ip ecn not-ect ip ttl 127 ip id 32611 ip length 52 tcp sport 62489 tcp dport https tcp flags == syn tcp window 8192
trace id 2c2a8923 ip firewall forwarding rule ip daddr 46.29.166.30 nftrace set 1 counter packets 0 bytes 0 accept (verdict accept)
trace id 2c2a8923 ip nat postrouting packet: oif "ppp0" @ll,xxx ip saddr 10.10.0.96 ip daddr 46.29.166.30 ip dscp cs0 ip ecn not-ect ip ttl 127 ip id 32611 ip length 52 tcp sport 62489 tcp dport https tcp flags == syn tcp window 8192
trace id 2c2a8923 ip nat postrouting rule ip saddr 10.10.0.0/24 oifname "ppp0" counter packets 0 bytes 0 masquerade (verdict accept)
trace id 73f8f405 ip firewall forwarding packet: iif "ppp0" oif "enp1s0" ip saddr 46.29.166.30 ip daddr 10.10.0.96 ip dscp af32 ip ecn not-ect ip ttl 58 ip id 0 ip length 52 tcp sport https tcp dport 62489 tcp flags == 0x12 tcp window 29200
trace id 73f8f405 ip firewall forwarding rule ip saddr 46.29.166.30 nftrace set 1 counter packets 0 bytes 0 accept (verdict accept)
trace id ca8ec4f5 ip firewall forwarding packet: iif "enp1s0" oif "ppp0" ether saddr xxx ether daddr xxx ip saddr 10.10.0.96 ip daddr 46.29.166.30 ip dscp cs0 ip ecn not-ect ip ttl 127 ip id 32612 ip length 40 tcp sport 62489 tcp dport https tcp flags == ack tcp window 256
And I don't know why, but with these rules some sites work fine from COMPUTER1 and some don't.
For example: https://google.com works well both from the server and from COMPUTER1, but https://teleguide.info works from the server (wget) and does not work from COMPUTER1.
Any idea what's wrong?
|
The firewall rules did not cause the problem. Instead, it's due to the MTU difference between "plain" Ethernet and PPPoE. Since the PPPoE encapsulation takes up (at least) 8 bytes, and the usual MTU of Ethernet itself is 1500 bytes, the MTU of PPPoE in that case will be at most 1492 bytes.
I don't know MTU stuff well enough to tell the details, but as far as I know, if the TCP SYN packet advertises an MSS larger than what can fit into the MTU of the interface that the replies will come in through, the replying traffic can end up having trouble actually getting in.
AFAIK, the reason it works fine with the router/server itself is that its MSS is derived from the MTU of its own outbound interface (ppp0), while COMPUTER1's outbound interface is plain Ethernet.
For TCP traffic, one can work around the problem with a rule in a forward-hook chain:
tcp flags syn tcp option maxseg size set 1452
1452 comes from 1500 - 8 - 40, where the 40 is the combined size of the base IPv4 header (20 bytes) and the TCP header (20 bytes). For IPv6 you may need 1500 - 8 - 60 = 1432 (a 40-byte IPv6 header plus the 20-byte TCP header).
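The arithmetic behind those clamp values can be sketched quickly (illustrative only; 20-byte base IPv4 and TCP headers and a 40-byte IPv6 header are assumed):

```python
def clamped_mss(link_mtu, tunnel_overhead, ip_header, tcp_header=20):
    """Largest TCP payload that fits: MTU minus tunnel, IP and TCP headers."""
    return link_mtu - tunnel_overhead - ip_header - tcp_header

# PPPoE eats 8 of the 1500 bytes of the Ethernet MTU.
print(clamped_mss(1500, 8, ip_header=20))  # IPv4: 1452
print(clamped_mss(1500, 8, ip_header=40))  # IPv6: 1432
```

The same formula gives you the value to use if your tunnel overhead differs (e.g. extra VLAN tags or other encapsulations).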
You might need to have the rule ordered before any accept rules. (It could depend on the whole structure of the ruleset though, I think.)
P.S. Not sure if you need any measure for UDP traffic.
Alternatively, you can probably just set the MTU of the Ethernet interfaces of all the LAN "clients" of this "router" (and that of its LANIF) to 1492. It's probably a less "workaround" approach, but could be quite a hassle.
| Router with nftables doesn't work well |
1,626,675,809,000 |
I have a frustrating issue with VirtualBox where I am unable to SSH into the virtual machine.
I am using Debian on both the host and the guest.
ip addr on host
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:26:b6:f5:f3:26 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.9/24 brd 10.0.0.255 scope global dynamic wlan0
valid_lft 48769sec preferred_lft 48769sec
inet6 fe80::a16d:a4e:4251:d1d9/64 scope link
valid_lft forever preferred_lft forever
10: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.77/24 brd 10.0.0.255 scope global vboxnet0
valid_lft forever preferred_lft forever
Guest ip addr:
I tried to do an nmap and arp-scan, but I do not see anything listening, and I'm very confused. Can anyone help me?
|
Please check these:
1. Does the VM use a bridged network? Then check the subnet mask.
The two hosts have different networks and gateways.
2. Does the VM use NAT? Then you have to configure a port-forwarding rule.
Refer to this link:
| Networking issue with Virtualbox |
1,626,675,809,000 |
I have sshd running on port 8000 running on a freshly installed plain vanilla Linux Mint 17.2 Rafaela.
$ sudo netstat -tnlp | grep :8000
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 839/sshd
tcp6 0 0 :::8000 :::* LISTEN 839/sshd
$
I can ssh from my PC to itself on localhost. Same for ssh -p 8000 127.0.0.1.
$ ssh -p 8000 localhost
The authenticity of host '[localhost]:8000 ([127.0.0.1]:8000)' can't be established.
ECDSA key fingerprint is 0d:bb:dd:87:b2:4a:72:3a:97:de:7d:2d:fe:52:05:6d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:8000' (ECDSA) to the list of known hosts.
mudd@localhost's password:
I have port 8000 forwarded on my router to my PC. I verified this using SSH server connectivity test. It was able to connect to my PC and retrieve the sshd fingerprint.
Connected to myhost.duckdns.org:8000
Server fingerprint is 2EA4035592EF0D0BE8527A6849BE42D5
This was confirmed by the following log message in /var/log/auth.log.
Sep 5 18:47:21 desktop sshd[4442]: Received disconnect from 50.116.26.68: 11: PECL/ssh2 (http://pecl.php.net/packages/ssh2) [preauth]
But I can't connect if I use the same host name and port from my PC. There are no log messages when the connection is refused.
$ ssh -vvv -p 8000 myhost.duckdns.org
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to myhost.duckdns.org [111.222.333.444] port 8000.
debug1: connect to address 111.222.333.444 port 8000: Connection refused
ssh: connect to host myhost.duckdns.org port 8000: Connection refused
$
I am not running the ufw firewall.
$ sudo ufw status
Status: inactive
$
Here are the non-comment lines from my sshd_config:
Port 8000
Protocol 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
UsePrivilegeSeparation yes
KeyRegenerationInterval 3600
ServerKeyBits 1024
SyslogFacility AUTH
LogLevel INFO
LoginGraceTime 120
PermitRootLogin without-password
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
IgnoreRhosts yes
RhostsRSAAuthentication no
HostbasedAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication no
X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
UsePAM yes
Here are the non-comment lines from my ssh_config:
Host *
SendEnv LANG LC_*
HashKnownHosts yes
GSSAPIAuthentication yes
GSSAPIDelegateCredentials no
I ran sudo tcpdump port 8000 and got the following when testing from SSH server connectivity test.
20:34:25.412135 IP li477-68.members.linode.com.50538 > 192.168.10.10.8000: Flags [S], seq 569792316, win 29200, options [mss 1460,sackOK,TS val 522115066 ecr 0,nop,wscale 7], length 0
20:34:25.412181 IP 192.168.10.10.8000 > li477-68.members.linode.com.50538: Flags [S.], seq 1436050940, ack 569792317, win 28960, options [mss 1460,sackOK,TS val 3115491 ecr 522115066,nop,wscale 7], length 0
20:34:25.464245 IP li477-68.members.linode.com.50538 > 192.168.10.10.8000: Flags [.], ack 1, win 229, options [nop,nop,TS val 522115082 ecr 3115491], length 0
20:34:25.464893 IP li477-68.members.linode.com.50538 > 192.168.10.10.8000: Flags [P.], seq 1:28, ack 1, win 229, options [nop,nop,TS val 522115082 ecr 3115491], length 27
20:34:25.464938 IP 192.168.10.10.8000 > li477-68.members.linode.com.50538: Flags [.], ack 28, win 227, options [nop,nop,TS val 3115504 ecr 522115082], length 0
20:34:25.488193 IP 192.168.10.10.8000 > li477-68.members.linode.com.50538: Flags [P.], seq 1:44, ack 28, win 227, options [nop,nop,TS val 3115510 ecr 522115082], length 43
20:34:25.489932 IP 192.168.10.10.8000 > li477-68.members.linode.com.50538: Flags [.], seq 44:1492, ack 28, win 227, options [nop,nop,TS val 3115511 ecr 522115082], length 1448
20:34:25.541411 IP li477-68.members.linode.com.50538 > 192.168.10.10.8000: Flags [.], ack 44, win 229, options [nop,nop,TS val 522115105 ecr 3115510], length 0
20:34:25.541481 IP 192.168.10.10.8000 > li477-68.members.linode.com.50538: Flags [P.], seq 1492:1692, ack 28, win 227, options [nop,nop,TS val 3115523 ecr 522115105], length 200
20:34:25.545375 IP li477-68.members.linode.com.50538 > 192.168.10.10.8000: Flags [P.], seq 28:676, ack 44, win 229, options [nop,nop,TS val 522115105 ecr 3115510], length 648
20:34:25.581765 IP 192.168.10.10.8000 > li477-68.members.linode.com.50538: Flags [.], ack 676, win 237, options [nop,nop,TS val 3115534 ecr 522115105], length 0
20:34:25.596528 IP li477-68.members.linode.com.50538 > 192.168.10.10.8000: Flags [.], ack 1692, win 274, options [nop,nop,TS val 522115122 ecr 3115511], length 0
20:34:25.635013 IP li477-68.members.linode.com.50538 > 192.168.10.10.8000: Flags [P.], seq 676:948, ack 1692, win 274, options [nop,nop,TS val 522115133 ecr 3115534], length 272
20:34:25.635043 IP 192.168.10.10.8000 > li477-68.members.linode.com.50538: Flags [.], ack 948, win 247, options [nop,nop,TS val 3115547 ecr 522115133], length 0
20:34:25.652925 IP 192.168.10.10.8000 > li477-68.members.linode.com.50538: Flags [P.], seq 1692:2540, ack 948, win 247, options [nop,nop,TS val 3115551 ecr 522115133], length 848
20:34:25.722014 IP li477-68.members.linode.com.50538 > 192.168.10.10.8000: Flags [P.], seq 948:964, ack 2540, win 296, options [nop,nop,TS val 522115159 ecr 3115551], length 16
20:34:25.761772 IP 192.168.10.10.8000 > li477-68.members.linode.com.50538: Flags [.], ack 964, win 247, options [nop,nop,TS val 3115579 ecr 522115159], length 0
20:34:25.814129 IP li477-68.members.linode.com.50538 > 192.168.10.10.8000: Flags [P.], seq 964:1016, ack 2540, win 296, options [nop,nop,TS val 522115187 ecr 3115579], length 52
20:34:25.814202 IP 192.168.10.10.8000 > li477-68.members.linode.com.50538: Flags [.], ack 1016, win 247, options [nop,nop,TS val 3115592 ecr 522115187], length 0
20:34:25.814396 IP 192.168.10.10.8000 > li477-68.members.linode.com.50538: Flags [P.], seq 2540:2592, ack 1016, win 247, options [nop,nop,TS val 3115592 ecr 522115187], length 52
20:34:25.868770 IP li477-68.members.linode.com.50538 > 192.168.10.10.8000: Flags [P.], seq 1016:1116, ack 2592, win 296, options [nop,nop,TS val 522115203 ecr 3115592], length 100
20:34:25.869212 IP li477-68.members.linode.com.50538 > 192.168.10.10.8000: Flags [F.], seq 1116, ack 2592, win 296, options [nop,nop,TS val 522115203 ecr 3115592], length 0
20:34:25.870699 IP 192.168.10.10.8000 > li477-68.members.linode.com.50538: Flags [F.], seq 2592, ack 1117, win 247, options [nop,nop,TS val 3115606 ecr 522115203], length 0
20:34:25.922969 IP li477-68.members.linode.com.50538 > 192.168.10.10.8000: Flags [.], ack 2593, win 296, options [nop,nop,TS val 522115220 ecr 3115606], length 0
This is all I get when running ssh -vvv -p 8000 myhost.duckdns.org.
20:36:38.940822 IP 192.168.10.10.35369 > fl-71-53-144-158.dhcp.embarqhsd.net.8000: Flags [S], seq 1068206726, win 29200, options [mss 1460,sackOK,TS val 3148873 ecr 0,nop,wscale 7], length 0
20:36:38.941219 IP fl-71-53-144-158.dhcp.embarqhsd.net.8000 > 192.168.10.10.35369: Flags [R.], seq 0, ack 1068206727, win 0, length 0
Any suggestions?
|
You won't be able to connect to the NAT'ed server name from the host which is the destination of the NAT. The reason is very simple: NAT breaks TCP/IP in this case. Just walk through what's happening at the TCP level and you will understand why it's not supposed to work:
from 192.168.10.10 you send a SYN packet to myhost.duckdns.org (an external IP address)
this request goes through you router and gets NAT'ed to 192.168.10.10:8000
192.168.10.10:8000 receives the request with the original source IP of 192.168.10.10 (since the router that did the NAT rewrote the destination only)
192.168.10.10:8000 replies back to 192.168.10.10 (the requestor)
The requestor will ignore the response from 192.168.10.10:8000 since it was expecting a response from myhost.duckdns.org (an external IP address)
This is the reason behind such a behaviour in a nutshell. One of the possible solutions would be to define a masquerading rule on your router to ensure that if somebody from the internal network tries to communicate with the NAT'ed port they would go through the router in both directions. Another option would be to define myhost.duckdns.org with 127.0.0.1 in your local /etc/hosts.
| ssh connection refused from same PC when going outside local network |
1,626,675,809,000 |
I am using older SOHO router wl500gP (it is v1 but I think this is not important) with custom Oleg firmware. My topology looks like this:
192.168.3.3 192.168.3.2 NAT 192.168.2.1 192.168.2.170 (DHCP)
PC1<--------------------------->(WAN)wl500gP(LAN)<-------------------------------->PC2
According to the web interface I have NAT enabled on my router (below are some outputs so that you can verify it; with the custom firmware the router is a Linux box).
Now my discovery: I can access the LAN interface of my router from PC1, but I cannot access PC2 from PC1. I am not sure whether accessing (even part of) the internal network from the outside world is normal NAT behavior. Shouldn't all the translated addresses be hidden behind the NAT? As far as I am aware, I have not set up virtual servers, port forwarding, DMZs, etc. Here are my experiments:
# PC1:
└──> ping 192.168.2.1
connect: Network is unreachable
└──> sudo route add -net 192.168.2.0 netmask 255.255.255.0 dev eth0
└──> ping 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_req=1 ttl=64 time=0.873 ms
64 bytes from 192.168.2.1: icmp_req=2 ttl=64 time=0.405 ms
64 bytes from 192.168.2.1: icmp_req=3 ttl=64 time=0.415 ms
64 bytes from 192.168.2.1: icmp_req=4 ttl=64 time=0.399 ms
^C
--- 192.168.2.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.399/0.523/0.873/0.202 ms
└──> ping 192.168.2.170
PING 192.168.2.170 (192.168.2.170) 56(84) bytes of data.
From 192.168.3.3 icmp_seq=1 Destination Host Unreachable
From 192.168.3.3 icmp_seq=2 Destination Host Unreachable
From 192.168.3.3 icmp_seq=3 Destination Host Unreachable
From 192.168.3.3 icmp_seq=4 Destination Host Unreachable
From 192.168.3.3 icmp_seq=5 Destination Host Unreachable
From 192.168.3.3 icmp_seq=6 Destination Host Unreachable
# PC2:
# Both pings to 192.168.3.2 and 192.168.3.3 are working. Also simple communication with 192.168.3.3 using netcat is possible.
Here is also a quite complex list of (I hope) the most important outputs from my router. These are mainly default values (I've changed IP addresses and some other stuff, but I hope nothing related to NAT, routing, bridging, forwarding, etc. that could cause the mentioned NAT behavior). If somebody can explain the IP Tables and IP Tables NAT sections I will be thankful. What I noticed is that the dnsmasq daemon is running, so it looks like NAT DNS is active.
wl500gP:
Interfaces
##########
br0 Link encap:Ethernet HWaddr 00:1B:FC:6B:81:02
inet addr:192.168.2.1 Bcast:192.168.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:203 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:77202 (75.3 KiB)
eth0 Link encap:Ethernet HWaddr 00:1B:FC:6B:81:02
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2561 errors:0 dropped:0 overruns:0 frame:0
TX packets:3101 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:264691 (258.4 KiB) TX bytes:2594967 (2.4 MiB)
Interrupt:4 Base address:0x1000
eth1 Link encap:Ethernet HWaddr 00:1B:FC:6B:81:02
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:14
TX packets:203 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:78826 (76.9 KiB)
Interrupt:12 Base address:0x2000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MULTICAST MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
vlan0 Link encap:Ethernet HWaddr 00:1B:FC:6B:81:02
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:203 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:78014 (76.1 KiB)
vlan1 Link encap:Ethernet HWaddr 00:1B:FC:6B:81:02
inet addr:192.168.3.2 Bcast:192.168.3.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2561 errors:0 dropped:0 overruns:0 frame:0
TX packets:2898 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:218593 (213.4 KiB) TX bytes:2516953 (2.3 MiB)
Routing Table
#############
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.3.3 0.0.0.0 255.255.255.255 UH 0 0 0 vlan1
192.168.3.0 0.0.0.0 255.255.255.0 U 0 0 0 vlan1
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 br0
127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo
0.0.0.0 192.168.3.3 0.0.0.0 UG 0 0 0 vlan1
IP Tables
#########
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID
2366 197K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 state NEW
170 63047 ACCEPT all -- br0 * 0.0.0.0/0 0.0.0.0/0 state NEW
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:21 flags:0x17/0x02
188 11280 ACCEPT tcp -- * * 0.0.0.0/0 192.168.2.1 tcp dpt:80
4 336 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpts:33434:33534
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- br0 br0 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
0 0 DROP all -- !br0 vlan1 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate DNAT
0 0 DROP all -- * br0 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 3069 packets, 2528K bytes)
pkts bytes target prot opt in out source destination
Chain BRUTE (0 references)
pkts bytes target prot opt in out source destination
Chain MACS (0 references)
pkts bytes target prot opt in out source destination
Chain SECURITY (0 references)
pkts bytes target prot opt in out source destination
0 0 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 1/sec burst 5
0 0 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x04 limit: avg 1/sec burst 5
0 0 RETURN udp -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/sec burst 5
0 0 RETURN icmp -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/sec burst 5
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
Chain logaccept (0 references)
pkts bytes target prot opt in out source destination
0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 state NEW LOG flags 7 level 4 prefix `ACCEPT '
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0
Chain logdrop (0 references)
pkts bytes target prot opt in out source destination
0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 state NEW LOG flags 7 level 4 prefix `DROP '
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
IP Tables NAT
#############
Chain PREROUTING (policy ACCEPT 4 packets, 336 bytes)
pkts bytes target prot opt in out source destination
189 11340 VSERVER all -- * * 0.0.0.0/0 192.168.3.2
Chain POSTROUTING (policy ACCEPT 13 packets, 4303 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * vlan1 !192.168.3.2 0.0.0.0/0
0 0 MASQUERADE all -- * br0 192.168.2.0/24 192.168.2.0/24
Chain OUTPUT (policy ACCEPT 13 packets, 4303 bytes)
pkts bytes target prot opt in out source destination
Chain VSERVER (1 references)
pkts bytes target prot opt in out source destination
189 11340 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 to:192.168.2.1:80
Process List
############
PID USER VSZ STAT COMMAND
1 admin 1484 S /sbin/init
2 admin 0 SW [keventd]
3 admin 0 SWN [ksoftirqd_CPU0]
4 admin 0 SW [kswapd]
5 admin 0 SW [bdflush]
6 admin 0 SW [kupdated]
7 admin 0 SW [mtdblockd]
54 admin 1484 S syslogd -m 0 -O /tmp/syslog.log -S -D -l 7 -b 1
58 admin 1480 S klogd
59 admin 1480 S telnetd
64 admin 1120 S httpd vlan1
70 nobody 852 S dnsmasq
73 admin 964 S lld2d br0 eth1
74 admin 0 SW [khubd]
83 admin 656 S p9100d -f /dev/usb/lp0 0
85 admin 1480 S rcamdmain
99 admin 1480 S watchdog
101 admin 1040 S upnp -D -L br0 -W vlan1
150 admin 1484 S sh -c /tmp/../usr/sbin/sysinfo > /tmp/sysinfo.htm
151 admin 1480 S /bin/sh /tmp/../usr/sbin/sysinfo
167 admin 1480 R ps
brctl show
##########
bridge name bridge id STP enabled interfaces
br0 8000.001bfc6b8102 no vlan0
eth1
|
That's normal, and has nothing to do with NAT. Linux, by default, treats IP addresses as belonging to the machine¹, not to a particular interface. So it'll answer packets to 192.168.2.1 on any interface, not just the LAN interface.
That said, NAT does not imply a firewall, or vice versa. You can, for example, map internal hosts 192.168.0.2–254 to public IPs X.Y.Z.2–254, and have all traffic heading to X.Y.Z.253 be forwarded to 192.168.0.253. That's still NAT.
With mapping an entire subnet to one external IP address, you get firewall-like behavior as a side effect, making connections basically outgoing-only. But even so, your firewall rules should still block those packets: the NAT code can probably be tricked into mapping a port you don't want it to, while the firewall won't be. And it's more flexible, too. (And if your ISP got compromised, they could send you traffic addressed to your private LAN addresses, and your machine would happily forward it.)
PS: To see the NAT rules, you want iptables -t nat -L. To see what is allowed to forward through your router/firewall, you need to look at the FORWARD chain, not the INPUT chain.
Footnotes
¹ Or more precisely, a particular network namespace on the machine—there is likely only one unless you're using certain virtual server techniques. The arp_ignore and arp_announce files in /proc/net/ipv4/conf/*/ configure this behavior.
| Does NAT allow accessing internal network (or router's local interface) from outisde by default? |
1,626,675,809,000 |
I want to configure a NAT behavior different from the default one implemented by iptables.
In this example:
iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 193.49.142.107:2000-4000
The default behavior of NAT implemented by iptables is Endpoint independent.
This means, all sessions initiated from the same host will have the same 'external' (IP, Port Number) even if there's a range of ports.
I need to know which flags or options must be modified in order to get a different port number for each session.
|
SNAT accepts a --random option (from iptables-extensions manpage):
--random
If option --random is used then port mapping will be randomized
(kernel >= 2.6.21).
So I'd try something like:
iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 193.49.142.107:2000-4000 --random
| Randomize external port when doing NAT with iptables |
1,626,675,809,000 |
Since I'm using a transparent proxy service, I use a Raspberry Pi as my home router. Its OS is plain Raspbian. Now I'm setting up a Minecraft server on 192.168.2.28, and am exposing it to the WAN using NAT. Here's my /etc/nftables.conf:
#!/sbin/nft -f
flush ruleset
table ip filter {
chain output {
type filter hook output priority 0; policy accept;
tcp sport 25565 drop
}
}
table ip nat {
chain prerouting {
type nat hook prerouting priority 0; policy accept;
tcp dport 25565 dnat 192.168.2.28
}
chain postrouting {
type nat hook postrouting priority 0; policy accept;
tcp sport 25565 ip saddr 192.168.2.28 masquerade
}
}
However, I have the following issue:
On 192.168.2.28, I run
nc -l -p 25565
On 192.168.2.27, I run
echo "Hello, world!" | nc wan_ip 25565
The wanted behavior is that I get the "Hello, world!" message on 192.168.2.28.
However, when the first SYN packet goes through the router, it only has its daddr NATed, keeping its saddr equal to 192.168.2.27.
When 192.168.2.28 receives the packet, it replies to 192.168.2.27. Since they are in the same L2 network, the packet doesn't go through the router, hence not NATed.
Then 192.168.2.27 receives the packet from 192.168.2.28, but it doesn't know this is the reply from wan_ip.
How can I fix this issue and make port forwarding work everywhere, including from LAN hosts?
|
As a reminder, nftables (just like iptables) sees only the first packet of a flow to be NAT-ed. When it comes to doing NAT, every other packet in the same flow will be handled directly by Netfilter/conntrack without seeing nftables anymore: so the return traffic is automatically un-NAT-ed without further assistance.
The only part that matters is what happens to the very first packet (so for TCP it's a SYN packet). Further traffic for this flow, including reply packets is automatically handled and bypasses NAT hooks.
That means there should be no special postrouting rule to handle traffic emitted from 192.168.2.28. There should only be a generic rule to masquerade all of 192.168.2.0/24 when communicating towards the Internet. Anyway, this rule is not a problem in itself: it just won't be used as often as one might think when the emitted traffic is reply traffic; as written above, such packets are part of a pre-existing flow and won't traverse nftables anymore.
What is important is to have proper NAT hairpinning support: the case where the client is in the same LAN as the server and asymmetric traffic would happen without proper care.
Here:
when the router sees anything (coming from anywhere) destined to tcp destination port 25565, redirect the destination address to 192.168.2.28
As the redirection doesn't discriminate between outside and inside, this rule should have been enough on its own when associated with an adequate generic masquerade rule for the whole LAN. But OP didn't use such a rule.
Instead of a generic masquerade in postrouting, OP used tcp sport 25565 ip saddr 192.168.2.28, which can never match the first packet of the flow: that packet carries tcp dport 25565 (its source port is the client's ephemeral port) and, after the previous rule, the new destination 192.168.2.28. So NAT hairpinning is not achieved.
What should be done instead is either one of:
1. use a generic masquerading rule for the whole LAN (adapted to the NAT hairpinning use case and lack of additional information)
chain postrouting {
type nat hook postrouting priority 0; policy accept;
ip saddr 192.168.2.0/24 ip daddr != 192.168.2.0/24 masquerade
}
2. use masquerading only when the server is the destination, i.e. for packets with a destination (rather than source) port 25565
chain postrouting {
type nat hook postrouting priority 0; policy accept;
tcp dport 25565 ip daddr 192.168.2.28 masquerade
}
That means only traffic to the server, including LAN-to-LAN for proper NAT hairpinning, is masqueraded (with this rule alone, LAN systems won't be able to reach the Internet).
3. instead of 2., apply masquerade only to traffic that first got DNAT
chain postrouting {
type nat hook postrouting priority 0; policy accept;
ct status dnat masquerade
}
Same effect as 2. but more generic.
4. combine 1. and 3. to masquerade only when needed
Options 2. and 3. have the side effect of hiding the Internet source address from the server, so better apply that only for the LAN:
chain postrouting {
type nat hook postrouting priority 0; policy accept;
ip saddr 192.168.2.0/24 ip daddr != 192.168.2.0/24 ct status dnat masquerade
}
So use either 1. or 4. (the latter to restrict LAN access to the Internet). There are other possible choices.
Note: There's a corner case for 1. : if the client uses the router's internal IP address as target it won't work without an additional rule for this case, but I doubt the client will do this, and I don't have enough information from OP: the router's internal IP address.
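Putting choice 3. together with the original redirect rule, a minimal complete ruleset could look like the sketch below. This is only an illustration: it uses the explicit "dnat to" syntax and the conventional NAT hook priorities, and (as noted above for options 2./3.) it does not give LAN hosts general Internet access unless you also add a LAN masquerade rule:

```
#!/sbin/nft -f
flush ruleset

table ip nat {
    chain prerouting {
        type nat hook prerouting priority -100; policy accept;
        # anything arriving for tcp/25565 goes to the server
        tcp dport 25565 dnat to 192.168.2.28
    }
    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        # masquerade any flow whose destination was rewritten above,
        # which covers the LAN-to-LAN hairpin case
        ct status dnat masquerade
    }
}
```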
| How to configure port forwarding with nftables for a Minecraft server on Raspberry Pi? |
1,626,675,809,000 |
I am currently setting up my IoT router and ran into some issues regarding connection speed. The router itself is a cascading router. My Internet speed is 100 Mbit, which I verified with a speed test while connected directly to the main router. However, if I connect via my cascading router, I only get a connection speed of between 10-18 Mbit. I think either kernel IP forwarding or iptables NAT is likely misconfigured.
The Operating system is Debian 8 Kernel Version 3.4 (Bananian Linux)
The Router itself is a Banana PI BPI R1
Iptables is running version v1.4.21
The relevant commands I ran to set up my network are
iptables -P FORWARD ACCEPT
iptables -t nat -A POSTROUTING -o eth0.101 -j MASQUERADE
(eth0.101 is the output interface that is connected to the main router.)
ip forwarding is enabled via systemctl
ipv6 is completely disabled
since the router’s network card uses an internal switch, I have to use vlans to separate the "lan" from the "wan" I achieve this via the tool swconfig
swconfig dev eth0 set reset 1
swconfig dev eth0 set enable_vlan 1
swconfig dev eth0 vlan 101 set ports '3 8t'
swconfig dev eth0 vlan 102 set ports '4 0 1 2 8t'
swconfig dev eth0 set apply 1
Why do I think this is the NAT/Forwarding?
My first thought was that my network card is not capable of higher speeds even though it claims to be. To confirm this, I ran a SOCKS5 proxy on my router and disabled IP forwarding for the test; when running a speed test via this SOCKS5 proxy I was able to achieve the 100 Mbit, which makes me conclude that it is not my network card that is the bottleneck.
I have tried a few things, including increasing the sizes of packet queues for my VLAN interfaces as they were zero, this changed nothing.
I also do not think that the CPU of my router is too weak to run this, because why would it be strong enough to work with a generic socks5 proxy and too weak to work with iptables?
Here is an output of ifconfig:
eth0 Link encap:Ethernet HWaddr 02:07:0b:02:15:ac
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:370503 errors:0 dropped:0 overruns:0 frame:0
TX packets:365330 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:310436570 (296.0 MiB) TX bytes:308685327 (294.3 MiB)
Interrupt:117 Base address:0xc000
eth0.101 Link encap:Ethernet HWaddr 02:07:0b:02:15:ac
inet addr:192.168.178.2 Bcast:192.168.178.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:209032 errors:0 dropped:0 overruns:0 frame:0
TX packets:171418 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:203959632 (194.5 MiB) TX bytes:102579119 (97.8 MiB)
eth0.102 Link encap:Ethernet HWaddr 02:07:0b:02:15:ac
inet addr:10.8.0.1 Bcast:10.8.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:161471 errors:0 dropped:0 overruns:0 frame:0
TX packets:193912 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:99807884 (95.1 MiB) TX bytes:204644888 (195.1 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
as well as /etc/network/interfaces:
auto lo
iface lo inet loopback
auto eth0.101
iface eth0.101 inet static
address 192.168.178.2
network 192.168.178.0
netmask 255.255.255.0
gateway 192.168.178.1
nameserver 8.8.8.8
auto eth0.102
iface eth0.102 inet static
address 10.8.0.1
network 10.8.0.0
netmask 255.255.255.0
Any ideas would be very much appreciated.
|
The Banana R1/Lamobo R1 while an interesting piece of hardware has too many shortcomings.
Firstly, the "switch" internal interface bandwidth is shared: a theoretical 1 Gbps tops for all 5 shared ports; the speed people have officially been able to get from it per interface is around 300 Mbit.
Second, it has to be set up for that in the device tree (overlays?) by the OS being used (I cannot remember the specific details); otherwise it will be slow.
Bananian Linux is an ugly hack; it does not work well, it will give you problems, and it may not set up your gigabit switch properly. Furthermore, Bananian has officially been a deprecated project since the end of the 1st quarter of 2017, and security updates for it will stop appearing in a couple of months.
I used the R1 with Armbian for a while; it worked well. I also cut physically the realtek wifi from it, it only created instability even when not being used.
You might also got power issues with mechanical hard disks; I used an SSD.
As a recommendation, stop using Bananian, and try ArmBian. Beware the switch interface is different in Armbian as it uses a more recent kernel 4.x.
Lastly, do not even waste time trying the OpenWRT version for R1. It is a botched job and is full of hacks for working around the big firewall of China.
Leaving R1-specific considerations now and moving to the routing side, one optimisation that can be done with most consumer-grade ISP routers is setting up a port in bridge mode and connecting your R1 there. Thus, your outside interface will get a public IP address, and you won't have double NAT behind the ISP router. (I am doing the same here.)
PS For readers coming here. The R1/R1S is not worth your time and money, get instead an AP router that can be hacked with OpenWRT.
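To separate raw link speed from NAT/forwarding overhead before blaming the board, a local bandwidth test with iperf3 is more reliable than an Internet speed test. This is a generic sketch (assuming iperf3 is installable on your distribution; addresses in angle brackets are placeholders):

```shell
# 1) Router-to-client, no forwarding involved: on a LAN client run
iperf3 -s
#    ...and on the router itself:
iperf3 -c <client-lan-ip>

# 2) Through the router: run "iperf3 -s" on a host on the WAN side,
#    then from a LAN host, so traffic crosses the NAT/forward path:
iperf3 -c <wan-host-ip> -t 30

# While testing, watch the router's CPU; a core pegged by softirq load
# points at per-packet processing cost rather than an iptables problem
top
```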
| Iptables NAT / Kernel IP forwarding limited to ~10Mbit |
1,626,675,809,000 |
How would I go about using ssh on Mac OS X to remote into a computer that's overseas? I would like to help my brother with his computer, but he lives in the UK now (and I'm pretty sure he uses a VPN to maintain connection to US based streaming media services). I've done it on my LAN, but never to a remote server...much less one that's overseas.
I tried the basic ssh [email protected], but I'm sure there are PLENTY of reasons why a random IPv4 address won't work :D
Do we need to configure his side?
|
Well you would need him to port forward port 22 from his computer and have him be running sshd if you want to do it the simple way.
On the other hand, you can do the dirty work and have him do a port forward through ssh to your computer. This would be done by you setting up an ssh daemon (instructions for OSX here), then port forwarding port 22 (ssh port) from your main computer through your router. There are many, many variations between each router and firmware, so I really can't walk you through this step. However, this site seems to have a very large database of guides tailored for different routers, so you might have some luck there. Also make sure you set a static private IP address for your main computer so that dhcp doesn't give you a new address. If this happens, your external port will be forwarded to a non-existent internal host, so it would be pointless.
Next, create a user for your brother. Nothing fancy here, just make sure it has a password and that he knows what it is. Also make sure that he has an ssh daemon running on his computer. He may also be interested in creating a user on his computer for you so that he need not expose his password to you and give you your own home directory.
Once you have an ssh daemon running, have your brother connect to your computer with the command ssh -R 2222:localhost:22 [email protected]. He should be able to connect and enter his password (which doesn't show on UNIX based OSes while you enter it for security reasons). Once he's connected, traffic from your computer on port 2222 will be forwarded to his computer's port 22 (the ssh port). You should now be able to connect to his computer with ssh -p 2222 you@localhost. Have fun!
As a closing remark, you may also want to pick up a hostname from a DDNS site; I suggest no-ip.com. This way, you can easily connect to your router, which will be at something.ddns.net, or something along those lines. I like no-ip because it's free, and if you configure your router to use it correctly, it will automatically update the hostname to point to your public IP address.
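If the reverse tunnel should survive flaky connections without him re-typing the command, autossh (a separate package, installable via Homebrew on OS X) can supervise it. This is a sketch; hisuser and your.ddns.example are placeholders for the account you created for him and the DDNS hostname suggested above:

```shell
# Supervised reverse tunnel: -M 0 disables autossh's extra monitoring
# port and relies on ssh's own keepalive probes instead
autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -R 2222:localhost:22 hisuser@your.ddns.example
```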
| Trying to SSH overseas [closed] |
1,626,675,809,000 |
I have a centos 6.5 installed on Virtual Box. I'm trying to configure a basic ftp server. Below are details of it.
Installed vsftpd from DVD/Yum: Status -> Successful
Disabled firewall for test purposes,
chkconfig iptables off : successful
service iptables stop : successful
Also set selinux in disabled mode: successful
Added a rule in VirtualBox for port forwarding over NAT:
Rule  Protocol  Host Port  Guest Port
Ftp   tcp       2121       21
Now when I try to connect with local user or anonymous, it gives a number of errors, every time a different error.
I'm also adding log messages shown by Filezila on my Host machine.
-> LocalUserLogFromClient
Status: Disconnected from server
Status: Resolving address of localhost
Status: Connecting to [::1]:2121...
Status: Connection attempt failed with "ECONNREFUSED - Connection refused by
server", trying next address.
Status: Connecting to 127.0.0.1:2121...
Status: Connection established, waiting for welcome message...
Response: 220 Welcome to C6G FTP service.
Command: AUTH TLS
Response: 530 Please login with USER and PASS.
Command: AUTH SSL
Response: 530 Please login with USER and PASS.
Status: Insecure server, it does not support FTP over TLS.
Command: USER FUser1
Response: 331 Please specify the password.
Command: PASS *****
Error: Connection timed out after 20 seconds of inactivity
Error: Could not connect to server
-> AnonymousUserLog
Status: Resolving address of localhost
Status: Connecting to [::1]:2121...
Status: Connection attempt failed with "ECONNREFUSED - Connection refused by
server", trying next address.
Status: Connecting to 127.0.0.1:2121...
Status: Connection established, waiting for welcome message...
Status: Insecure server, it does not support FTP over TLS.
Status: Connected
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/"
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: PASV
Response: 227 Entering Passive Mode (10,0,2,15,88,204).
Command: LIST
Error: The data connection could not be established: 10065
Error: Connection timed out after 20 seconds of inactivity
Error: Failed to retrieve directory listing
|
An FTP server behind a port forwarder can't serve in passive mode unless the software used is specially designed to pass FTP data connections through. The cases are:
The NAT software being able to sniff for 227 Entering Passive Mode in the FTP control connection, to do port forwarding accordingly along with some mangling of the FTP control data.
The FTP server sending passive-mode replies with the visible (NAT) IP address, and the NAT forwarding a given range of ports.
Thanks to @derobert for pointing out a flaw in my original reasoning.
Linux kernel NAT is supplied with (optional) FTP support. But if Linux is the guest, then its NAT has nothing to do with the problem. The host system's NAT must do NAT for the guest, indeed. Does the Windows NAT software in question have such a capability? I don't know, and it's not a Linux question anyway. Recommended solution: replace NAT with bridging for the virtual machine.
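If you stick with NAT, a workaround matching the second case is to pin vsftpd's passive data ports to a small fixed range and forward that same range in VirtualBox. The vsftpd option names are standard; the port range and the VM name "centos65" are arbitrary examples:

```shell
# In /etc/vsftpd/vsftpd.conf on the guest:
#   pasv_enable=YES
#   pasv_min_port=21100
#   pasv_max_port=21110
# If the client honors the advertised address, also set
#   pasv_address=127.0.0.1
# so the 227 reply points back at the host's forwarded ports.

# On the host (VM powered off; use "controlvm ... natpf1" while running),
# forward the control port and the whole passive range:
VBoxManage modifyvm "centos65" --natpf1 "ftp,tcp,,2121,,21"
for p in $(seq 21100 21110); do
  VBoxManage modifyvm "centos65" --natpf1 "pasv$p,tcp,,$p,,$p"
done
```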
| FTP Troubleshooting |
1,626,675,809,000 |
I'm trying to run docker inside of Ubuntu 22.04.3 LTS running in WSL-2 on my Windows 10 machine.
I have followed the instructions here. But it's still not working, I am getting the following error when I run sudo dockerd :
failed to register "bridge" driver: failed to create NAT chain DOCKER:
iptables failed: iptables -t nat -N DOCKER:
iptables v1.8.7 (legacy): can't initialize iptables table `nat':
Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
(exit status 3)
When I do modprobe ip_tables I get :
modprobe: FATAL: Module ip_tables not found in directory /lib/modules/4.4.0-22621-Microsoft
|
I followed this post, which says to launch ubuntu.exe as administrator and that will fix the issue.
What this does is install a fresh distro, and iptables worked there. It still didn't work on the first distro I had installed previously, but since it works fine on the newly installed distro, I consider it fixed.
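For readers who would rather not reinstall: on Ubuntu the iptables command is a wrapper that can point at either the nft or the legacy backend, and switching backends is a commonly suggested workaround when the WSL kernel lacks the modules one backend needs (whether it helps depends on your kernel):

```shell
# Show the available backends and pick one interactively
sudo update-alternatives --config iptables
sudo update-alternatives --config ip6tables

# Then retry the daemon
sudo dockerd
```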
| Launching docker daemon in Ubuntu 22.04 LTS on WSL-2 fails because of iptables |
1,626,675,809,000 |
I am trying to debug an application which sends RPCs via HTTPS. In order to read the actual RPC content, I am trying to use SSLSplit on the same machine as the application to MITM the connection. To that end, I set up a rule in my iptables NAT table which routes all traffic not coming from a root application through 127.0.0.1:8443:
iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner root --dport 443 -j REDIRECT --to-port 8443
Subsequently, I run sslsplit -D -k key.pem -c cert.pem -P https 127.0.0.1 8443 as root to prevent outbound traffic from SSLSplit (to the destination server) from being redirected back to SSLSplit. Nevertheless, I get Error 24 on listener: Too many open files, which according to https://github.com/droe/sslsplit/issues/93#issuecomment-96894847 can be attributed to too many sockets being opened. This can be a symptom of traffic sent by SSLSplit being looped back into SSLSplit.
I am at a loss as to what I am doing wrong given that SSLSplit is running as root and root traffic is not affected by the redirect rule. Furthermore, I checked if my iptables rule is correct by executing curl twice, once as root and once as non-root. As expected, the non-root curl doesn't work (curl: (7) Failed to connect to unix.stackexchange.com port 443: Connection refused) while the root curl works perfectly (==> not affected by the iptables rule).
Questions:
Given that SSLSplit is running as root, how can my iptables rule create a loop that causes traffic sent from SSLSplit to be fed back into itself?
How can I fix this to finally be able to read the communication?
An excerpt from the output I get when trying to visit https://unix.stackexchange.com in my browser with SSLSplit running:
SNI peek: [www.gravatar.com] [complete]
Connecting to [192.0.73.2]:443
<repeated 96 times>
SNI peek: [platform-lookaside.fbsbx.com] [complete]
SNI peek: [www.gravatar.com] [complete]
Connecting to [157.240.17.15]:443
Connecting to [192.0.73.2]:443
<repeated some more times>
SNI peek: [www.gravatar.com] [complete]
Connecting to [192.0.73.2]:443
<repeated 95 times>
SNI peek: [platform-lookaside.fbsbx.com] [complete]
SNI peek: [www.gravatar.com] [complete]
Connecting to [157.240.17.15]:443
Connecting to [192.0.73.2]:443
<repeated some more times>
Error 24 on listener: Too many open files
Main event loop stopped (reason=0).
Child pid 12445 exited with status 0
|
Reading OP's very same linked comment:
Add an input interface so that only inbound connections are sent to
sslsplit, e.g. if your LAN facing interface is eth0 and your WAN
facing interface is eth1,
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8080
which isn't what was done here, so the comment couldn't warn about how sslsplit handles being run as the root user.
Actually, when run as root, the sslsplit command drops privileges by switching to user nobody unless configured otherwise:
/*
* User to drop privileges to by default. This user needs to be allowed to
* create outbound TCP connections, and in some configurations, perform DNS
* resolution.
*
* Packagers may want to use a specific service user account instead of
* overloading nobody with yet another use case.
[...]
*/
#define DFLT_DROPUSER "nobody"
This is applied to the forked "worker" subprocess. I'm not sure there is much point in running this as root unless binding to a low port.
So what should be done is:
run sslsplit as a normal user
It won't change uid or gid, just ensure that the application to intercept doesn't run as this user. Change the redirect to:
iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner normaluser --dport 443 -j REDIRECT --to-port 8443
or configure sslsplit to use a dedicated user and/or group when run as root.
See the configuration file or the -u and -m options for this purpose. I'd use (or create and use) the group proxy, and then match with ! --gid-owner proxy.
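Concretely, the dedicated-group approach could look like the sketch below (the group name proxy is just the suggestion above; -u and -m are the sslsplit options already mentioned):

```shell
# Dedicated group for the proxy's outbound connections
sudo groupadd proxy

# Exempt that group (instead of root) from the redirect
sudo iptables -t nat -A OUTPUT -p tcp -m owner ! --gid-owner proxy \
    --dport 443 -j REDIRECT --to-port 8443

# Drop privileges to nobody:proxy for the worker process
sudo sslsplit -D -k key.pem -c cert.pem -u nobody -m proxy \
    -P https 127.0.0.1 8443
```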
| Local transparent proxy SSLSplit causes forwarding loop |
1,626,675,809,000 |
I'm trying to ping an external ip (in this case google) from inside a network namespace.
ip netns add ns1
# Create v-eth1 and v-peer1: v-eth1 is in the host space whereas peer-1 is supposed to be in the ns
ip link add v-eth1 type veth peer name v-peer1
# Move v-peer1 to ns
ip link set v-peer1 netns ns1
# set v-eth1
ip addr add 10.200.1.1/24 dev v-eth1
ip link set v-eth1 up
# Set v-peer1 in the ns
ip netns exec ns1 ip addr add 10.200.1.2/24 dev v-peer1
ip netns exec ns1 ip link set v-peer1 up
# Set loopback interface in the ns
ip netns exec ns1 ip link set lo up
# Add defaut route in the ns
ip netns exec ns1 ip route add default via 10.200.1.1
# Set host routing tables
iptables -t nat -A POSTROUTING -s 10.200.1.0/24 -j MASQUERADE
# Enable routing in the host
sysctl -w net.ipv4.ip_forward=1
#
ip netns exec ns1 ping 8.8.8.8
For some reason this works fine inside a VirtualBox VM (on my laptop) and on my desktop (Ubuntu 18.04), but it does not work on my laptop's host OS (which is also Ubuntu 18.04).
I tried traceroute and this is what I got:
on laptop
on desktop
Does any of you have any idea on what should I investigate in order to find the problem?
I don't have a firewall set as far as I know (ufw is disabled)
EDIT: this is what I get with iptables-save -c:
# Generated by iptables-save v1.6.1 on Fri Jan 17 18:05:36 2020
*filter
:INPUT ACCEPT [3774:2079111]
:FORWARD DROP [5:420]
:OUTPUT ACCEPT [3053:308301]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
[16984:18361691] -A FORWARD -j DOCKER-USER
[16984:18361691] -A FORWARD -j DOCKER-ISOLATION-STAGE-1
[12139:18094316] -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
[0:0] -A FORWARD -o docker0 -j DOCKER
[4761:260319] -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
[0:0] -A FORWARD -i docker0 -o docker0 -j ACCEPT
[0:0] -A FORWARD -o br-6a72e380ece6 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
[0:0] -A FORWARD -o br-6a72e380ece6 -j DOCKER
[0:0] -A FORWARD -i br-6a72e380ece6 ! -o br-6a72e380ece6 -j ACCEPT
[0:0] -A FORWARD -i br-6a72e380ece6 -o br-6a72e380ece6 -j ACCEPT
[4761:260319] -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
[0:0] -A DOCKER-ISOLATION-STAGE-1 -i br-6a72e380ece6 ! -o br-6a72e380ece6 -j DOCKER-ISOLATION-STAGE-2
[16984:18361691] -A DOCKER-ISOLATION-STAGE-1 -j RETURN
[0:0] -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
[0:0] -A DOCKER-ISOLATION-STAGE-2 -o br-6a72e380ece6 -j DROP
[4761:260319] -A DOCKER-ISOLATION-STAGE-2 -j RETURN
[16984:18361691] -A DOCKER-USER -j RETURN
COMMIT
# Completed on Fri Jan 17 18:05:36 2020
# Generated by iptables-save v1.6.1 on Fri Jan 17 18:05:36 2020
*nat
:PREROUTING ACCEPT [406:111092]
:INPUT ACCEPT [9:703]
:OUTPUT ACCEPT [29:2283]
:POSTROUTING ACCEPT [28:2114]
:DOCKER - [0:0]
[253:19770] -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
[0:0] -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
[4:249] -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
[0:0] -A POSTROUTING -s 172.18.0.0/16 ! -o br-6a72e380ece6 -j MASQUERADE
[1:169] -A POSTROUTING -s 10.200.1.0/24 -j MASQUERADE
[0:0] -A DOCKER -i docker0 -j RETURN
[0:0] -A DOCKER -i br-6a72e380ece6 -j RETURN
COMMIT
# Completed on Fri Jan 17 18:05:36 2020
|
You have Docker, which itself alters the firewall rules. I can't tell whether that's because of Docker, but you have iptables' default policy for filter/FORWARD set to DROP, preventing any routing not explicitly allowed.
EDIT: added the return direction.
To make your experiment work this should be enough (including the return traffic which must also be enabled):
iptables -A FORWARD -i v-eth1 -j ACCEPT
iptables -A FORWARD -o v-eth1 -j ACCEPT
Note that those could be complemented with the interface to/from internet but I don't have its name.
Usually the rules below are preferred, letting the return traffic be allowed by stateful tracking (conntrack), so you only have to care about the initial traffic. Feel free to try it.
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i v-eth1 -j ACCEPT
As a side note kernels >= 4.7 usually require/allow a few more settings to have conntrack helpers (ftp...) to work correctly/securely, but that's not needed for your experiment (ICMP is handled). Some informations in this blog: Secure use of iptables and connection tracking helpers.
In case of doubt (like interaction with Docker) use -I instead, to be sure your rules are inserted before anything else. Just be aware that restarting Docker might alter the rules again. Now that you know where the problem is, it's up to you to integrate this into your boot sequence alongside Docker.
You might be interested in reading Docker's documentation about its use of iptables: Docker and iptables.
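One way to make such rules more robust against Docker rewriting the ruleset, per Docker's iptables documentation, is to put them in the DOCKER-USER chain (which already exists in your dump): FORWARD jumps to it first, and Docker does not remove pre-existing rules from it. A sketch for your veth pair (rules still won't survive a reboot):

```shell
# Allow traffic initiated from the namespace side...
iptables -I DOCKER-USER -i v-eth1 -j ACCEPT
# ...and the tracked return traffic towards it
iptables -I DOCKER-USER -o v-eth1 -m conntrack \
    --ctstate ESTABLISHED,RELATED -j ACCEPT
```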
| Unable to ping external network from namespace, probably postrouting not working |
1,626,675,809,000 |
I am trying to experiment with DNAT in PREROUTING. I found a tutorial here. It contains the following sentence:
This is done in the PREROUTING chain, just as the packet comes in; this means that anything else on the Linux box itself (routing, packet filtering) will see the packet going to its 'real' destination.
I want to ask what the author means by the last part, i.e. that anything else on the Linux box itself will see the packet going to its 'real' destination?
I tried a test where I have a virtual device (tap) and I redirected incoming ICMP packets to that tap device (my tap device address is 10.0.4.1/24 and there is a program listening to the tap device, so its state is UP):
# iptables -t nat -A PREROUTING -i eth0 -p icmp -j DNAT --to-destination 10.0.4.2
When I ping an external IP, this rule never gets used (the pkts count in iptables remains 0 for this rule). Is this observation related to what the author is saying?
|
Your first question is already answered by the text you quoted:
This is done in the PREROUTING chain, just as the packet comes in;
this means that anything else on the Linux box itself (routing,
packet filtering) will see the packet going to its 'real'
destination.
I.e. routing and packet filtering.
For your second question: you seem to be pinging from the system itself. Hence the packets are not coming into the system, hence these packets don't pass through the PREROUTING chain. You will need to originate those packets from outside that system.
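If you do want locally generated pings to be redirected as well, the rule can be duplicated in the nat OUTPUT chain, which is what locally originated packets traverse instead of PREROUTING; the verbose counters then show which rule actually matches:

```shell
# Same DNAT, but for packets generated on this box itself
iptables -t nat -A OUTPUT -p icmp -j DNAT --to-destination 10.0.4.2

# Packet/byte counters reveal which chain and rule is being hit
iptables -t nat -L -v -n
```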
| Changing Destination IP address using iptables and DNAT |
1,626,675,809,000 |
I'm trying to create a simple iptable rule using the command sample below. But the routing does not work. Any inputs on what is missing as I'm not familiar with iptables.
sudo iptables -t nat -A PREROUTING -p tcp -d 10.10.20.10 --dport 8321 -j DNAT --to-destination 192.168.56.101:8321
The ip 10.10.20.10 is not assigned to any interface.
The iptables rules are as follows:
# Generated by iptables-save v1.6.1 on Tue Mar 5 14:21:30 2019
*nat
:PREROUTING ACCEPT [5:2009]
:INPUT ACCEPT [5:2009]
:OUTPUT ACCEPT [141:9332]
:POSTROUTING ACCEPT [141:9332]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A PREROUTING -d 10.10.20.10/32 -p tcp -m tcp --dport 8321 -j DNAT --to-destination 192.168.56.101:8321
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Tue Mar 5 14:21:30 2019
# Generated by iptables-save v1.6.1 on Tue Mar 5 14:21:30 2019
*filter
:INPUT ACCEPT [923:68802]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [810:87756]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
The ip addr ouput is
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:46:d2:d7 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 85059sec preferred_lft 85059sec
inet6 fe80::a00:27ff:fe46:d2d7/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:0e:42:40 brd ff:ff:ff:ff:ff:ff
inet6 fd0c:6493:12bf:2942::ac18:1164/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe0e:4240/64 scope link
valid_lft forever preferred_lft forever
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:bf:83:a2 brd ff:ff:ff:ff:ff:ff
inet 192.168.56.101/24 brd 192.168.56.255 scope global dynamic enp0s9
valid_lft 908sec preferred_lft 908sec
inet6 fe80::a00:27ff:febf:83a2/64 scope link
valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:8a:d2:57:bd brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
ip route output is:
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15
10.0.2.2 dev enp0s3 proto dhcp scope link src 10.0.2.15 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.56.0/24 dev enp0s9 proto kernel scope link src 192.168.56.101
|
Traffic originating from the same host where the nat PREROUTING DNAT rule is set does not traverse the nat PREROUTING chain, which is why you are not seeing it being applied.
Instead you need to use the nat OUTPUT chain for locally-generated packets:
sudo iptables -t nat -A OUTPUT -p tcp -d 10.10.20.10 --dport 8321 -j DNAT --to-destination 192.168.56.101:8321
You can find iptables processing flowcharts by doing an image search for those keywords; they make clear the order in which packets traverse the iptables chains.
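To confirm the relocated rule is the one matching, the per-rule packet counters are handy. This assumes the OUTPUT rule above has been added:

```shell
# Zero the counters, generate some traffic, then inspect
sudo iptables -t nat -Z OUTPUT
curl --connect-timeout 3 http://10.10.20.10:8321/ || true
sudo iptables -t nat -L OUTPUT -v -n --line-numbers
```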
| iptable dnat rule not working ubuntu |
1,626,675,809,000 |
Basically I have a armbian distro configured as NAT where wlan0 is the internal interface and eth0 is the "pubic" interface that provides internet (this set is provided out of the box by armbian-config).
My devices connect over wlan0 grabbing an IP, say 172.24.1.114
I have added a VPN to a remote network resulting in the creation of ppp0, with IP 10.10.10.12
Having these info, what I want to achieve is:
Only one IP (e.g. 172.24.1.114) has to always go towards ppp0 (that is all traffic back and forth should go to ppp0, so I can either reach machines and navigate on internet with the remote IP)
All other IPs can normally go towards eth0
Starting from the configured NAT from armbian-config I have added the extra iptables rules:
-A FORWARD -i wlan0 -o ppp0 -j ACCEPT (this is before -A FORWARD -i wlan0 -o eth0 -j ACCEPT created by armbian-config)
-A POSTROUTING -o ppp0 -j MASQUERADE (order shouldn't impact with -A POSTROUTING -o eth0 -j MASQUERADE created by armbian-config)
-A FORWARD -i ppp0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT (just to be sure!)
These extra rules + the one from armbian-config seem to work almost perfectly:
From the 172.24.1.114 client I can see the content of a remote web server, say http://10.10.10.20 (so apparently it goes through ppp0)
From the 172.24.1.114 client I can navigate on the internet, but unfortunately, checking the IP I go out with (using a geo IP website), it is still the one from eth0
All other clients correctly navigate going out through eth0
So to summarize: I can now reach the remote network over the VPN for that IP, but it is not able to navigate through ppp0
As last try I found the way to set rule policies, like in this guide (http://wiki.wlug.org.nz/SourceBasedRouting), so I can specify that source IP 172.24.1.114 goes to custom table other than the main one; then I added in this new table the default gateway of 10.10.10.1 dev ppp0. This leads to lack of web navigation for that IP.
|
I have resolved everything.
First, the required iptables rules are (these give access to the remote VPN's machines):
-A FORWARD -i wlan0 -o ppp0 -j ACCEPT
-A POSTROUTING -o ppp0 -j MASQUERADE
Then, to indicate which IP or range of IPs have to have a different route, you need policy rules:
open /etc/iproute2/rt_tables and put your entry (ID tablename):
100 my_custom_table
ip rule add from 172.24.1.114/24 table my_custom_table (tells to go to another table other than the main one for the source IP 172.x.x.x)
ip route add 172.24.1.0/24 dev wlan0 table my_custom_table (required to receive packets back from ppp0)
ip route add default via 10.10.10.1 dev ppp0 table my_custom_table (routes packet to the VPN's gateway)
Make sure the firewall on the VPN server allows incoming traffic from VPN IPs.
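Putting the pieces of this answer together, a consolidated sketch (addresses and interface names taken from the question; run as root):

```
# Forwarding + NAT towards the tunnel
iptables -A FORWARD -i wlan0 -o ppp0 -j ACCEPT
iptables -A FORWARD -i ppp0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE

# Policy routing for the selected client
grep -q my_custom_table /etc/iproute2/rt_tables || echo "100 my_custom_table" >> /etc/iproute2/rt_tables
# note: /24 here matches the whole 172.24.1.0/24 range; use /32 to match only this single IP
ip rule add from 172.24.1.114/24 table my_custom_table
ip route add 172.24.1.0/24 dev wlan0 table my_custom_table
ip route add default via 10.10.10.1 dev ppp0 table my_custom_table
```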
| NAT a specific IP to go to ppp0 and others to go to eth0 coming from internal wifi interface |
1,626,675,809,000 |
I made two Virtualbox virtual machines, CentOS 7 latest iso. I gave them each 2 network interfaces, one NAT and one "Internal network" in order to get them to talk to each other and have Internet access. I set manual IP addresses for each:
NAT interfaces: VM1: 10.0.2.15/24 - VM2: 10.0.2.16/24
Internal Network interfaces: VM1: 10.0.2.1/24 - VM2: 10.0.2.2/24
My problem is, my SSH port forwarding rules don't work unless I disable the "internal network" interfaces. Once I do, port forwarding starts working again.
How could I fix this? My goals are to have the 2 VMs able to communicate with each other and with Internet access.
|
try
NAT interfaces: VM1: 10.0.2.15/24 - VM2: 10.0.2.16/24
Internal Network interfaces: VM1: 10.0.3.1/24 VM2 10.0.3.2/24
You will have trouble using the same network on different interfaces.
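As a sketch, the internal-network interface on each CentOS 7 VM could be reconfigured with an ifcfg file (the interface name enp0s8 is an assumption — check yours with ip link):

```
# /etc/sysconfig/network-scripts/ifcfg-enp0s8 on VM1 (VM2 uses IPADDR=10.0.3.2)
DEVICE=enp0s8
BOOTPROTO=none
IPADDR=10.0.3.1
PREFIX=24
ONBOOT=yes
```

Then restart networking on each VM (e.g. systemctl restart network) so the new addresses take effect.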
| Virtualbox - how to give Internet access and internal network access to two VMs? |
1,626,675,809,000 |
I'm using this rule to configure SNAT:
iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 193.49.142.107:4000
I want to specify a rule to filter out packets not destined to the internal address and port that initiated the session. Additionally, for receiving packets from a specific external endpoint, it is necessary for the internal endpoint to send packets first to that specific external endpoint's IP address.(NAT Address Dependent Filtering)
Example:
A machine with internal IP and port (X:x) which is behind the NAT opens a connection to a server with IP Y. So with the rule I must be able to allow only connections coming from IP address Y and destined to (X:x). All other connections will be dropped.
|
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
iptables -P FORWARD DROP
iptables -A FORWARD -o eth1 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth1 -m state --state ESTABLISHED -j ACCEPT
What do these rules do?
-A POSTROUTING -o eth1 -j MASQUERADE hides your internal IPs as packets leave your network
-P FORWARD DROP sets the default policy for your FORWARD chain to DROP
-A FORWARD -o eth1 -m state --state NEW,ESTABLISHED -j ACCEPT allows new and established FORWARDed connections out
-A FORWARD -i eth1 -m state --state ESTABLISHED -j ACCEPT allows only established FORWARDed connections in
The rules above are assuming that you're using this box as a gateway/firewall with eth1 connected to your WAN and eth0 connected to your LAN.
Additional Reading: Postrouting and IP Masquerading
EDIT
To configure "conditional" port forwarding:
By source port
iptables -A PREROUTING -t nat -i eth1 -p tcp --sport [trusted_source_port] --dport [external_port] -j DNAT --to [internal_ip]:[internal_port]
iptables -A FORWARD -p tcp -d [internal_ip] --dport [internal_port] -j ACCEPT
By source IP
iptables -A PREROUTING -t nat -i eth1 -p tcp -s [trusted_source_ip] --dport [external_port] -j DNAT --to [internal_ip]:[internal_port]
iptables -A FORWARD -p tcp -d [internal_ip] --dport [internal_port] -j ACCEPT
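Applied to the example in the question, with hypothetical addresses X = 192.168.1.10 (the internal host) and Y = 203.0.113.7 (the server it contacted), the address-dependent filtering could look like this (both addresses are placeholders, not from the original post):

```
# Allow the internal host to open connections out to Y only
iptables -A FORWARD -o eth1 -s 192.168.1.10 -d 203.0.113.7 -m state --state NEW,ESTABLISHED -j ACCEPT
# Allow replies in only from Y, and only for the session the host initiated
iptables -A FORWARD -i eth1 -s 203.0.113.7 -d 192.168.1.10 -m state --state ESTABLISHED -j ACCEPT
```

With the FORWARD policy set to DROP as above, everything not matching these two rules is discarded.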
| Configure NAT filtring behavior with iptables |
1,626,675,809,000 |
I have one subdomain (dyndns): xyz.dyndns.com, for example.
Here my question begins:
I have a server for virtualisation.
On the server I have multiple VMs
where different web-services are available through https.
My router is configured with xyz.dyndns.com
VMs can be accessed over NAT or be Bridged to router
How is it possible to access VM_1's and VM_2's WebService like:
https://xyz.dyndns.com/vm_1_webservice or
https://xyz.dyndns.com/vm_2_webservice
Because I need to add redirection rules to the router.
I know that with bridging the VMs I can simply redirect HTTP to the VM on the router.
In my local network I have configured anything with DNAT (iptables).
I.e. when I go through localhost(server):40001(port), for example, I will be redirected to VM_1's WebService, which is accessible through SSL. Like:
https://127.0.0.1:40001/vm_1_webservice
It works well. How do I go on? (With nginx or Apache on the host?)
|
The typical approach is to setup a web server such as Nginx or Apache on either the router/switch box, or have the router/switch box redirect ports 80 & 443 to a internal host that's running Nginx or Apache.
Once traffic has been setup so that it's passing to a web server, you can then setup virtual hosts within the web server, which can take care to route the traffic to the appropriate vm1_webservice, vm2_webservice, etc.
Nginx
I'll show you 1 basic Nginx method but you can get very elaborate with these rules once you grok how it works. Also take a look at this tutorial titled: How nginx processes a request which shows how you can configure Nginx to service multiple sites on a single port 80/443.
server {
    server_name www.example.com;
    location / {
        proxy_pass http://localhost:4567/;
    }
}
server {
    server_name www.example2.com;
    location / {
        proxy_pass http://localhost:4568/;
    }
}
You'd change the proxy_pass lines to match whatever port @ host your vm1_webservice was running on, for example.
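Since the question asks for path-based URLs on a single hostname rather than separate hostnames, here is a hedged sketch of the location-based variant (port 40002 for VM_2 is an assumption, analogous to VM_1's 40001 from the question):

```
server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity
    server_name xyz.dyndns.com;

    location /vm_1_webservice/ {
        proxy_pass https://127.0.0.1:40001/vm_1_webservice/;
    }
    location /vm_2_webservice/ {
        proxy_pass https://127.0.0.1:40002/vm_2_webservice/;
    }
}
```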
| Redirection to services in NAT or Bridged Network only with one subdomain |
1,626,675,809,000 |
When installing pivpn on Raspberry Pi it will create an iptables rule:
pi@RPi64:~ $ sudo iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 10.122.242.0/24 anywhere /* wireguard-nat-rule */
I think it does this by inserting the rule via iptables-persistent:
pi@RPi64:~ $ cat /etc/iptables/rules.v4
# Generated by iptables-save v1.8.7 on Fri Aug 12 08:07:21 2022
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.122.242.0/24 -o eth0 -m comment --comment wireguard-nat-rule -j MASQUERADE
COMMIT
# Completed on Fri Aug 12 08:07:21 2022
This is on the server side, of course. If I want to fully connect from a client to this server, I need to add masquerading by inserting a similar rule on the client:
pi@schwarz:~ $ sudo iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- anywhere anywhere
I do it like this:
pi@schwarz:~ $ sudo cat /etc/wireguard/schwarz.conf
[Interface]
PrivateKey =
Address = 10.122.242.4/24
PostUp = iptables -t nat -A POSTROUTING -o schwarz -j MASQUERADE
PreDown = iptables -t nat -D POSTROUTING -o schwarz -j MASQUERADE
DNS = 9.9.9.9, 149.112.112.112
[Peer]
...
I then add static routes on both routers, so that traffic from other clients in each LAN destined for the other LAN is routed to the WireGuard server or client.
This way I am able to fully connect to all devices in both LANs from any client in both LANs.
The problem with this approach is that clients lose their original IP from the original LAN and will instead appear in the other network with the IP of the WireGuard client (plus a port). This is of course due to NAT (masquerading).
Everything works fine this way.
Except one service: Logitech Media Server. This server cannot handle all clients that come from a remote LAN because they now have the same IP. To be more specific, the problem affects only some Logitech clients (Radio). The clients connect fine to the server, but they don't see the server responding. Other clients (Boom) connect fine. They use a different protocol.
This made me wonder why pivpn is even masquerading the IPs. Should it not suffice to have static routes from the LANs to the client/server and on those clients/server to the tunnels they create?
Why the masquerading? Is it done for the case of the WireGuard server also acting as an ISP router to the internet? This is not the case here. The router is always on a different machine.
Long story short, I was wondering if it should be possible in general to remove the masquerading with pivpn. Also, maybe someone can point to an error I have in my setup.
|
Masquerading in general is used for access to one network from a second network where the first network isn't set up to route replies back to the second. You apply masquerading to packets going out the gateway to the first network (rewriting the packets' source address to use the gateway's address), so that other hosts on that network will reply back to the gateway (which will translate the destination of reply packets back to the original source address).
You don't need masquerading if you are connecting two LANs, and each LAN is set up to route to the other through its own WireGuard gateway (the classic site-to-site WireGuard configuration).
You do need masquerading if you are connecting a LAN (or WireGuard network) to the Internet (ie routing to the Internet, not merely tunneling through the Internet).
With a site-to-site connection, if the LAN router on each LAN is also the WireGuard gateway, you usually would not use masquerading; usually you would just set up the WireGuard interface on each LAN router with a route (and appropriate AllowedIPs setting) to the other LAN, and add firewall rules to the routers that allow appropriate access from one site to the other.
In your case, where it sounds like you have a gateway (your Pis) at each site that is different than the LAN router, you can remove the need for masquerading by 1) adding the route to the other site to each LAN router (or alternatively to each individual device that needs to access the other site), and 2) adding the other site's LAN network to the AllowedIPs setting on the WireGuard gateway.
It sounds like you may have already done this; but to give a concrete example, if you are connecting two LANs, 10.100.100.0/24 and 10.200.200.0/24, and the WireGuard gateway in LAN 1 is 10.100.100.123 and the WireGuard gateway in LAN 2 is 10.200.200.234, you would add a route to the LAN router (or individual devices) in LAN 1 like the following (using the appropriate LAN-connected interface for the router or device, like eth1):
10.200.200.0/24 via 10.100.100.123 dev eth1
And a corresponding route to the LAN router (or individual devices) in LAN 2 like the following:
10.100.100.0/24 via 10.200.200.234 dev eth1
In the WireGuard config for LAN 1, you'd include the other site's network in the AllowedIPs setting for the other site:
[Interface]
Address = 10.122.242.1/24
...
[Peer]
AllowedIPs = 10.122.242.2, 10.200.200.0/24
...
And correspondingly, in the WireGuard config for LAN 2, you'd include LAN 1's network in the AllowedIPs setting for LAN 1:
[Interface]
Address = 10.122.242.2/24
...
[Peer]
AllowedIPs = 10.122.242.1, 10.100.100.0/24
...
With that configuration in place, you can safely remove the masquerading rules from your WireGuard gateways, and traffic can be routed from one site to the other and back without any NAT.
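If you decide to drop NAT entirely, a sketch of removing the PiVPN-installed rule (the rule spec is taken from the rules.v4 shown in the question):

```
# Remove the masquerade rule at runtime (the spec must match exactly, including the comment)
sudo iptables -t nat -D POSTROUTING -s 10.122.242.0/24 -o eth0 \
    -m comment --comment wireguard-nat-rule -j MASQUERADE
# Also delete the matching line from /etc/iptables/rules.v4,
# otherwise iptables-persistent will restore the rule on reboot.
```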
If you still wanted to use one of the WireGuard gateways as a gateway to the Internet, however, you could keep the masquerading rule, but simply carve out an exception for packets destined for the gateway's own LAN; for example, like this on the WireGuard gateway for LAN 2:
iptables -t nat -A POSTROUTING ! -d 10.200.200.0/24 -o eth0 -j MASQUERADE
One minor, unrelated nit about the WireGuard config you posted: you almost never want to include the DNS setting in a site-to-site configuration. You'd usually only use the DNS setting on the "point" side of a point-to-site configuration, for the purpose of using a different DNS resolver on the endpoint when its WireGuard interface is up than when it is down.
| Can I run PIVPN with Wireguard without MASQUERADING? |
1,626,675,809,000 |
I'm working from the answer of this question and man nft in order to create some dnat rules in my nftables config.
The relevant config extract is:
define src_ip = 192.168.1.128/26
define dst_ip = 192.168.1.1
define docker_dns = 172.20.10.5
table inet nat {
map dns_nat {
type ipv4_addr . ipv4_addr . ip_proto . inet_service : ipv4_addr . inet_service
flags interval
elements = {
$src_ip . $dst_ip . udp . 53 : $docker_dns . 5353,
}
}
chain prerouting {
type nat hook prerouting priority -100; policy accept;
dnat to ip saddr . ip daddr . ip protocol . th dport map @dns_nat;
}
}
When I apply this rule with nft -f, I see no command output so I presume it succeeded. However, when I inspect the ruleset using nft list ruleset, the rules aren't present. When the dnat to ... line is commented out, the rules appear to be applied; however, when the line is present, the rules are not applied.
The collection of rules in the prerouting chain I'm attempting to replace is:
ip saddr $src_ip ip daddr $dst_ip udp dport 53 dnat to $docker_dns:5353;
...
Version information:
# nft -v
nftables v1.0.6 (Lester Gooch #5)
# uname -r
6.1.0-11-amd64
Why might this not be working? Thanks
|
There are 3 problems.
no error is displayed
This looks to be a bug in nftables 1.0.6, see following bullets.
Here with the same version and OP's ruleset in /tmp/ruleset.nft:
# nft -V
nftables v1.0.6 (Lester Gooch #5)
[...]
# nft -f /tmp/ruleset.nft
/tmp/ruleset.nft:7:38-45: Error: unknown datatype ip_proto
type ipv4_addr . ipv4_addr . ip_proto . inet_service : ipv4_addr . inet_service
^^^^^^^^
/tmp/ruleset.nft:6:9-15: Error: set definition does not specify key
map dns_nat {
^^^^^^^
Error: unknown datatype ip_proto
The original linked Q/A used the correct type inet_proto. This should not have been replaced with ip_proto which is an unknown type. So replace back:
type ipv4_addr . ipv4_addr . ip_proto . inet_service : ipv4_addr . inet_service
with the correct original spelling:
type ipv4_addr . ipv4_addr . inet_proto . inet_service : ipv4_addr . inet_service
A list of available types can be found in nft(8) at PAYLOAD EXPRESSION and more precisely for this case at IPV4 HEADER EXPRESSION:
Keyword
Description
Type
[...]
protocol
Upper layer protocol
inet_proto
[...]
typeof ip protocol <=> type inet_proto (not type ip_proto).
Normally typeof should be preferred to type to avoid having to guess the correct type, but as I wrote in the linked Q/A, some versions of nftables might not cope correctly with this precise case. The replacement would have been:
typeof ip saddr . ip daddr . ip protocol . th dport : ip daddr . th dport
which is almost a cut/paste from the rule using it, but its behavior should be thoroughly tested.
no error is displayed - take 2
Once this previous error is fixed (and the result put in /tmp/ruleset2.nft), then, as OP wrote, trying again the ruleset fails silently:
# nft -V
nftables v1.0.6 (Lester Gooch #5)
cli: editline
json: yes
minigmp: no
libxtables: yes
# nft -f /tmp/ruleset2.nft
# echo $?
1
#
The only clue that it failed is the non-0 return code.
While with a newer nftables version:
# nft -V
nftables v1.0.8 (Old Doc Yak #2)
cli: editline
json: yes
minigmp: no
libxtables: yes
# nft -f /tmp/ruleset2.nft
/tmp/ruleset2.nft:16:9-12: Error: specify `dnat ip' or 'dnat ip6' in inet table to disambiguate
dnat to ip saddr . ip daddr . ip protocol . th dport map @dns_nat;
^^^^
#
Now the error is displayed. Whatever the issue was in 1.0.6, it has been fixed at least by version 1.0.8.
Error: specify `dnat ip' or 'dnat ip6' in inet table to disambiguate
Because NAT is done in the inet family (combined IPv4+IPv6) rather than either ip (IPv4) or ip6 (IPv6) family, one parameter which is usually optional becomes mandatory: state the IP version NAT should be applied to (even if one could infer it from the map table layout (IPv4)). Documentation tells:
NAT STATEMENTS
snat [[ip | ip6] to] ADDR_SPEC [:PORT_SPEC] [FLAGS]
dnat [[ip | ip6] to] ADDR_SPEC [:PORT_SPEC] [FLAGS]
masquerade [to :PORT_SPEC] [FLAGS]
redirect [to :PORT_SPEC] [FLAGS]
[...]
When used in the inet family (available with kernel 5.2), the dnat and
snat statements require the use of the ip and ip6 keyword in case an
address is provided, see the examples below.
So:
dnat to ip saddr . ip daddr . ip protocol . th dport map @dns_nat;
should be replaced with:
dnat ip to ip saddr . ip daddr . ip protocol . th dport map @dns_nat
The original Q/A didn't state the family, so it would be assumed it was the default ip family which wouldn't require this.
Of course, this will work with nftables 1.0.6, only the error reporting had a problem. The return code will now be 0.
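Combining the two fixes described in this answer (the inet_proto type and the dnat ip keyword), the corrected ruleset from the question becomes:

```
define src_ip = 192.168.1.128/26
define dst_ip = 192.168.1.1
define docker_dns = 172.20.10.5

table inet nat {
    map dns_nat {
        type ipv4_addr . ipv4_addr . inet_proto . inet_service : ipv4_addr . inet_service
        flags interval
        elements = {
            $src_ip . $dst_ip . udp . 53 : $docker_dns . 5353,
        }
    }
    chain prerouting {
        type nat hook prerouting priority -100; policy accept;
        dnat ip to ip saddr . ip daddr . ip protocol . th dport map @dns_nat
    }
}
```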
| nftables dnat map rule failing silently |
1,626,675,809,000 |
I would like to change the source address of every packet generated by a process in a given cgroup (version 2). Is that even possible?
I have:
nftables 1.0.2,
linux 5.15 (Ubuntu variant)
/system.slice/system-my-service.slice/[email protected] cgroup
I have tried to:
create a table nft add table ip myservice
create a postrouting nat chain nft add chain ip myservice postrouting { type nat hook postrouting priority 100 \; }
try to create postrouting rule nft add rule ip myservice postrouting socket cgroupv2 level 1 'system.slice' snat 10.0.0.1 (during experiments, I have used only 'system.slice', because nft has issues with @ in the cgroup name, which would be level 2 issue :-).
I have also found cgroup matching, which requires an int32 argument, from which I guess it's cgroup version 1 (thus not applicable for me), because I have found no hint about converting the path-style cgroup name to an int.
I suspect the socket expression is not applicable to the postrouting nat chain, as nft suggests with Error: Could not process rule: Operation not supported.
Do all these failures mean that this is a completely wrong approach?
Or have I just missed something obvious?
|
Here's an answer to your two problems:
syntax
cgroupv2 expects a path, which is a string. A string is always displayed with double-quotes, and requires double-quotes if it includes special characters. These double-quotes are for the nft command's consumption, not for the shell. With direct commands (ie: not in a file read using nft -f), these double-quotes themselves should be escaped or enclosed with single-quote, else the shell would consume them.
In addition, this path is documented as relative and doesn't need a leading / (it's accepted anyway and removed when displayed back), giving, when used directly from the shell:
socket cgroupv2 level 3 '"system.slice/system-my-service.slice/[email protected]"'
Finally, nft doesn't care if it's given a single parameter with multiple tokens or multiple parameters one token at a time: the line is assembled and parsed the same. So whenever there's a special character for the shell (here "), just single-quote all the line rather than try to figure out where to escape characters (eg \; in base chains).
work around the limitation
You can mark the packet in output hook and check the mark in postrouting hook to do NAT.
nft 'add chain ip myservice { type filter hook output priority 0; policy accept; }'
nft 'add rule ip myservice output socket cgroupv2 level 3 "system.slice/system-my-service.slice/[email protected]" meta mark set 0xcafe'
nft add rule ip myservice postrouting meta mark 0xcafe snat to 10.0.0.1
The final result can be put in its own file, with the two glue commands at the start for idempotence. Having its own file prevents any interaction with shell parsing, and gets atomic updates. It still requires "..." around the path parameter to keep nftables from interpreting the @ character.
myservice.nft:
table ip myservice
delete table ip myservice
table ip myservice {
chain postrouting {
type nat hook postrouting priority 100; policy accept;
meta mark 0xcafe snat to 10.0.0.1
}
chain output {
type filter hook output priority 0; policy accept;
socket cgroupv2 level 3 "system.slice/system-my-service.slice/[email protected]" meta mark set 0xcafe
}
}
which can then be loaded with nft -f myservice.nft, as long as the cgroup already exists (probably meaning, that at boot the service has to be started before nftables loads this rule).
In the end:
every outgoing packet will traverse filter/output
if it's from a process in the adequate cgroup the packet will receive a mark 0xcafe
every first packet of a new flow will traverse the nat/postrouting rule.
if it matches the mark 0xcafe (meaning it was in the adequate cgroup) this will trigger the SNAT rule for the flow (then the mark doesn't matter anymore)
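To sanity-check the result, a sketch (the conntrack tool requires the conntrack-tools package to be installed):

```
nft -f myservice.nft          # load the ruleset; the cgroup must already exist
nft list table ip myservice   # confirm both chains and rules are in place
conntrack -L --src-nat        # flows rewritten to source 10.0.0.1 should appear here
```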
| Can nftables perform postrouting matching on crgroupv2? |
1,626,675,809,000 |
When creating a dnat rule, you can specify the following command:
nft 'add rule ip twilight prerouting ip daddr 1.2.3.0/24 dnat ip prefix to ip daddr map { 1.2.3.0/24 : 2.3.4.0/24 }'
And then get dnat that maps addresses like 1.2.3.4 -> 2.3.4.4. This command runs as expected with nftables v1.0.4 (Lester Gooch #3), and according to the answer here.
If I try to do the same with ipv6, using the following commands:
nft 'add rule ip6 twilight prerouting ip6 daddr aa:bb:cc:dd::/64 dnat ip6 prefix to ip6 daddr map { [aa:bb:cc:dd::]/64 : [bb:cc:dd:ee::]/64 }'
nft 'add rule ip6 twilight prerouting ip6 daddr aa:bb:cc:dd::/64 dnat ip6 prefix to ip6 daddr map { aa:bb:cc:dd::/64 : bb:cc:dd:ee::/64 }'
nft 'add rule ip6 twilight prerouting ip6 daddr aa:bb:cc:dd::/64 dnat ip6 prefix to ip6 daddr map { "aa:bb:cc:dd::/64" : "bb:cc:dd:ee::/64" }'
Then, I get the following error messages:
Error: syntax error, unexpected newline
add rule ip6 twilight prerouting ip6 daddr aa:bb:cc:dd::/64 dnat ip6 prefix to ip6 daddr map { [aa:bb:cc:dd::]/64 : [bb:cc:dd:ee::]/64 }
^
Error: syntax error, unexpected newline
add rule ip6 twilight prerouting ip6 daddr aa:bb:cc:dd::/64 dnat ip6 prefix to ip6 daddr map { aa:bb:cc:dd::/64 : bb:cc:dd:ee::/64 }
^
Error: syntax error, unexpected newline
add rule ip6 twilight prerouting ip6 daddr aa:bb:cc:dd::/64 dnat ip6 prefix to ip6 daddr map { "aa:bb:cc:dd::/64" : "bb:cc:dd:ee::/64" }
^
Is there a way that I can make anonymous ipv6 maps in nftables?
|
TL;DR: You need at least nftables version >= 1.0.5.
In version 1.0.5:
scanner: allow prefix in ip6 scope
Which matches this commit:
scanner: allow prefix in ip6 scope
'ip6 prefix' is valid syntax, so make sure scanner recognizes it also
in ip6 context.
Also add test case.
[...]
diff --git a/tests/shell/testcases/sets/0046netmap_0 b/tests/shell/testcases/sets/0046netmap_0
index 2804a4a2..60bda401 100755
--- a/tests/shell/testcases/sets/0046netmap_0
+++ b/tests/shell/testcases/sets/0046netmap_0
@@ -8,6 +8,12 @@ EXPECTED="table ip x {
10.141.13.0/24 : 192.168.4.0/24 > }
}
}
+ table ip6 x {
+ chain y {
+ type nat hook postrouting priority srcnat; policy accept;
+ snat ip6 prefix to ip6 saddr map { 2001:db8:1111::/64 : 2001:db8:2222::/64 }
+ }
+ }
"
set -e
The corresponding regression test is similar to OP's attempt. OP's syntax tested ok here with nftables 1.0.7.
| nftables anonymous map for ipv6 dnat |
1,626,675,809,000 |
I am using VirtualBox on Windows now.
The network is roughly like this:
[Fedora 37 VM] -- NAT network -- [Windows Host] ---- intranet ---- internet
I use DNS on the intranet to resolve host.domain names like both some.host.on.intranet and www.yahoo.co.jp.
On my Windows host, this is OK.
But I am not so lucky on my Fedora VM.
shao@fedora Music $ resolvectl status
Global
Protocols: LLMNR=resolve -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Link 2 (enp0s3)
Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 10.0.2.1
DNS Servers: 10.0.2.1 10.3.1.24 192.168.3.1
DNS Domain: intra.somedomain.co.jp
Link 3 (docker0)
Current Scopes: none
Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
My primary DNS is 10.0.2.1, which is OK, same as my Windows host.
I can resolve www.yahoo.co.jp on the Linux VM.
shao@fedora Music $ ping www.yahoo.co.jp
PING edge12.g.yimg.jp (183.79.250.251) 56(84) bytes of data.
64 bytes from 183.79.250.251: icmp_seq=1 ttl=54 time=17.4 ms
64 bytes from 183.79.250.251: icmp_seq=2 ttl=54 time=20.5 ms
When I try to resolve host.domain names on the intranet, I get:
shao@fedora Music $ ping dev-dm-energy101z.dev.jp.local
ping: dev-dm-energy101z.dev.jp.local: Temporary failure in name resolution
What confuses me is that I can 'dig' that host.domain name.
shao@fedora Music $ dig @10.0.2.1 dev-dm-energy101z.dev.jp.local
; <<>> DiG 9.18.11 <<>> @10.0.2.1 dev-dm-energy101z.dev.jp.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34400
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4000
;; QUESTION SECTION:
;dev-dm-energy101z.dev.jp.local. IN A
;; ANSWER SECTION:
dev-dm-energy101z.dev.jp.local. 721 IN A 100.67.254.168
;; Query time: 11 msec
;; SERVER: 10.0.2.1#53(10.0.2.1) (UDP)
;; WHEN: Thu Mar 09 10:27:42 JST 2023
;; MSG SIZE rcvd: 75
I also checked tcpdump when I performed these instructions.
I can see UDP traffic when I 'ping yahoo' or 'dig intranet_host', like this:
10:40:31.283922 enp0s3 Out IP 10.9.9.4.45466 > 10.0.2.1.53: 7945+ [1au] A? www.yahoo.co.jp. (44)
10:40:31.284623 enp0s3 Out IP 10.9.9.4.35216 > 10.0.2.1.53: 59710+ [1au] AAAA? www.yahoo.co.jp. (44)
10:40:31.292909 enp0s3 In IP 10.0.2.1.53 > 10.9.9.4.45466: 7945 2/0/1 CNAME edge12.g.yimg.jp., A 183.79.217.124 (88)
...
10:45:14.514350 enp0s3 Out IP 10.9.9.4.54319 > 10.0.2.1.53: 3623+ [1au] A? dev-dm-energy101z.dev.jp.local. (71)
10:45:14.531879 enp0s3 In IP 10.0.2.1.53 > 10.9.9.4.54319: 3623 1/0/1 A 100.67.254.168 (75)
But when I 'ping intranet_host', tcpdump -i any -nn udp stays silent.
Did I miss some config?
Any hint will help, thanks in advance.
===========================================================
2023-03-15:
I found something interesting.
Fedora just refuses to resolve host.domain names ending in .local, like:
stg-zed2-jpe2.stg.jp.local
or dev-dm-energy.dev.jp.local.
Is there a DNS convention like that?
|
You can't use .local like that. It's reserved for Multicast DNS (mDNS) lookups in an environment without managed DNS servers.
It's actually reserved for use with the full set of Zeroconf technologies, but the one relevant here is mDNS.
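A possible workaround (my addition, not part of the accepted answer): systemd-resolved will route .local lookups to unicast DNS only when local is explicitly configured as a routing or search domain for a link. A sketch, assuming the link name enp0s3 and the domain from the question:

```
# resolvectl domain replaces the per-link domain list, so keep the existing one too;
# the ~ prefix makes "local" a routing-only domain
sudo resolvectl domain enp0s3 'intra.somedomain.co.jp' '~local'
resolvectl query dev-dm-energy101z.dev.jp.local
```

Note this setting does not survive a reconfiguration of the link; to make it persistent you would put the domains in the connection profile (e.g. via NetworkManager).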
| Fedora VM behind NAT can not ping host.domain name on intranet |
1,626,675,809,000 |
I've set up a PPTPd server on an Ubuntu 18.04 machine with kernel 5.0.x, am able to connect to it from a Win10 machine, ping the server and ping 8.8.8.8. So DNS works over the VPN.
However, I'm not able to retrieve any web pages or connect to other ports.
I've done the modprobe things (nf_nat_pptp and nf_conntrack_pptp).
The server has a local address of 192.168.0.1 and the client receives 192.168.0.100, as per defaults. The iptables below show some other address ranges that were used for testing.
I'm pretty sure it's an iptables issue but I don't have experience tracing these kind of issues by just looking at the rules list.
# sysctl -p
net.ipv4.ip_forward = 1
net.netfilter.nf_conntrack_helper = 1
# iptables -nvL -t nat --line-number
Chain PREROUTING (policy ACCEPT 239K packets, 23M bytes)
num pkts bytes target prot opt in out source destination
1 252K 25M PREROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0
2 252K 25M PREROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
Chain INPUT (policy ACCEPT 50342 packets, 2729K bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 62252 packets, 4500K bytes)
num pkts bytes target prot opt in out source destination
1 62476 4516K OUTPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0
Chain POSTROUTING (policy ACCEPT 21854 packets, 1580K bytes)
num pkts bytes target prot opt in out source destination
1 40409 2921K MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0
2 70 4886 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0
3 5 325 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0
4 22005 1590K POSTROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0
5 22005 1590K POSTROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
6 6 432 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0
7 0 0 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0
8 0 0 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT_direct (1 references)
num pkts bytes target prot opt in out source destination
Chain POSTROUTING_ZONES (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 POST_public all -- * * 0.0.0.0/0 192.168.0.0/24 [goto]
2 22005 1590K POST_public all -- * + 0.0.0.0/0 0.0.0.0/0 [goto]
Chain POSTROUTING_direct (1 references)
num pkts bytes target prot opt in out source destination
Chain POST_public (2 references)
num pkts bytes target prot opt in out source destination
1 22005 1590K POST_public_pre all -- * * 0.0.0.0/0 0.0.0.0/0
2 22005 1590K POST_public_log all -- * * 0.0.0.0/0 0.0.0.0/0
3 22005 1590K POST_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0
4 22005 1590K POST_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0
5 22005 1590K POST_public_post all -- * * 0.0.0.0/0 0.0.0.0/0
Chain POST_public_allow (1 references)
num pkts bytes target prot opt in out source destination
Chain POST_public_deny (1 references)
num pkts bytes target prot opt in out source destination
Chain POST_public_log (1 references)
num pkts bytes target prot opt in out source destination
Chain POST_public_post (1 references)
num pkts bytes target prot opt in out source destination
Chain POST_public_pre (1 references)
num pkts bytes target prot opt in out source destination
Chain PREROUTING_ZONES (1 references)
num pkts bytes target prot opt in out source destination
1 11025 842K PRE_public all -- * * 192.168.0.0/24 0.0.0.0/0 [goto]
2 241K 24M PRE_public all -- + * 0.0.0.0/0 0.0.0.0/0 [goto]
Chain PREROUTING_direct (1 references)
num pkts bytes target prot opt in out source destination
Chain PRE_public (2 references)
num pkts bytes target prot opt in out source destination
1 252K 25M PRE_public_pre all -- * * 0.0.0.0/0 0.0.0.0/0
2 252K 25M PRE_public_log all -- * * 0.0.0.0/0 0.0.0.0/0
3 252K 25M PRE_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0
4 252K 25M PRE_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0
5 252K 25M PRE_public_post all -- * * 0.0.0.0/0 0.0.0.0/0
Chain PRE_public_allow (1 references)
num pkts bytes target prot opt in out source destination
Chain PRE_public_deny (1 references)
num pkts bytes target prot opt in out source destination
Chain PRE_public_log (1 references)
num pkts bytes target prot opt in out source destination
Chain PRE_public_post (1 references)
num pkts bytes target prot opt in out source destination
Chain PRE_public_pre (1 references)
num pkts bytes target prot opt in out source destination
# iptables -nvL --line-number
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 5673 557K ACCEPT 47 -- * * 0.0.0.0/0 0.0.0.0/0
2 5071 511K ACCEPT 47 -- * * 0.0.0.0/0 0.0.0.0/0
3 83 4400 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:1723 state NEW
4 1532 141K ACCEPT 47 -- * * 0.0.0.0/0 0.0.0.0/0
5 1 52 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:1723 state NEW
6 1759 163K ACCEPT 47 -- eth0 * 0.0.0.0/0 0.0.0.0/0
7 555K 94M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED,DNAT
8 21994 1589K ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
9 110K 6423K INPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0
10 72059 4171K INPUT_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
11 8933 364K DROP all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID
12 11808 1026K REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
13 0 0 ACCEPT 47 -- * * 0.0.0.0/0 0.0.0.0/0
14 0 0 ACCEPT 47 -- * * 0.0.0.0/0 0.0.0.0/0
15 0 0 ACCEPT 47 -- * * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 2072 108K TCPMSS tcp -- * * 192.168.0.0/24 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU
2 0 0 TCPMSS tcp -- * * 10.0.0.0/24 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU
3 3739 194K TCPMSS tcp -- * * 192.168.0.0/24 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU
4 0 0 TCPMSS tcp -- * * 192.168.1.0/24 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU
5 0 0 ACCEPT 47 -- * * 0.0.0.0/0 0.0.0.0/0
6 80 5808 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED,DNAT
7 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
8 8862 510K FORWARD_direct all -- * * 0.0.0.0/0 0.0.0.0/0
9 8862 510K FORWARD_IN_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
10 8819 507K FORWARD_OUT_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
11 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID
12 8819 507K REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
13 0 0 TCPMSS tcp -- * * 192.168.0.0/24 0.0.0.0/0 tcp flags:0x17/0x02 TCPMSS set 1356
14 0 0 ACCEPT all -- ppp+ eth0 0.0.0.0/0 0.0.0.0/0
15 0 0 ACCEPT all -- eth0 ppp+ 0.0.0.0/0 0.0.0.0/0
16 0 0 ACCEPT all -- ppp+ * 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 603K packets, 125M bytes)
num pkts bytes target prot opt in out source destination
1 8423 599K ACCEPT 47 -- * eth0 0.0.0.0/0 0.0.0.0/0
2 43988 3795K ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0
3 627K 136M OUTPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0
4 0 0 ACCEPT 47 -- * * 0.0.0.0/0 0.0.0.0/0
5 0 0 ACCEPT 47 -- * * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD_IN_ZONES (1 references)
num pkts bytes target prot opt in out source destination
1 8862 510K FWDI_public all -- * * 192.168.0.0/24 0.0.0.0/0 [goto]
2 0 0 FWDI_public all -- + * 0.0.0.0/0 0.0.0.0/0 [goto]
Chain FORWARD_OUT_ZONES (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 FWDO_public all -- * * 0.0.0.0/0 192.168.0.0/24 [goto]
2 8819 507K FWDO_public all -- * + 0.0.0.0/0 0.0.0.0/0 [goto]
Chain FORWARD_direct (1 references)
num pkts bytes target prot opt in out source destination
Chain FWDI_public (2 references)
num pkts bytes target prot opt in out source destination
1 8862 510K FWDI_public_pre all -- * * 0.0.0.0/0 0.0.0.0/0
2 8862 510K FWDI_public_log all -- * * 0.0.0.0/0 0.0.0.0/0
3 8862 510K FWDI_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0
4 8862 510K FWDI_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0
5 8862 510K FWDI_public_post all -- * * 0.0.0.0/0 0.0.0.0/0
6 43 3348 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
Chain FWDI_public_allow (1 references)
num pkts bytes target prot opt in out source destination
Chain FWDI_public_deny (1 references)
num pkts bytes target prot opt in out source destination
Chain FWDI_public_log (1 references)
num pkts bytes target prot opt in out source destination
Chain FWDI_public_post (1 references)
num pkts bytes target prot opt in out source destination
Chain FWDI_public_pre (1 references)
num pkts bytes target prot opt in out source destination
Chain FWDO_public (2 references)
num pkts bytes target prot opt in out source destination
1 8819 507K FWDO_public_pre all -- * * 0.0.0.0/0 0.0.0.0/0
2 8819 507K FWDO_public_log all -- * * 0.0.0.0/0 0.0.0.0/0
3 8819 507K FWDO_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0
4 8819 507K FWDO_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0
5 8819 507K FWDO_public_post all -- * * 0.0.0.0/0 0.0.0.0/0
Chain FWDO_public_allow (1 references)
num pkts bytes target prot opt in out source destination
Chain FWDO_public_deny (1 references)
num pkts bytes target prot opt in out source destination
Chain FWDO_public_log (1 references)
num pkts bytes target prot opt in out source destination
Chain FWDO_public_post (1 references)
num pkts bytes target prot opt in out source destination
Chain FWDO_public_pre (1 references)
num pkts bytes target prot opt in out source destination
Chain INPUT_ZONES (1 references)
num pkts bytes target prot opt in out source destination
1 1214 114K IN_public all -- * * 192.168.0.0/24 0.0.0.0/0 [goto]
2 70845 4058K IN_public all -- + * 0.0.0.0/0 0.0.0.0/0 [goto]
Chain INPUT_direct (1 references)
num pkts bytes target prot opt in out source destination
1 37532 2252K REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 25,465,587,143,993,110,995 match-set f2b-postfix-sasl src reject-with icmp-port-unreachable
2 1 60 REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 22 match-set f2b-sshd src reject-with icmp-port-unreachable
Chain IN_public (2 references)
num pkts bytes target prot opt in out source destination
1 72059 4171K IN_public_pre all -- * * 0.0.0.0/0 0.0.0.0/0
2 72059 4171K IN_public_log all -- * * 0.0.0.0/0 0.0.0.0/0
3 72059 4171K IN_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0
4 72059 4171K IN_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0
5 20855 1397K IN_public_post all -- * * 0.0.0.0/0 0.0.0.0/0
6 114 6638 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
Chain IN_public_allow (1 references)
num pkts bytes target prot opt in out source destination
1 2000 84664 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 ctstate NEW,UNTRACKED
2 21 868 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:21 ctstate NEW,UNTRACKED
3 27 1460 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:993 ctstate NEW,UNTRACKED
4 12 496 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:465 ctstate NEW,UNTRACKED
5 676 36856 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 ctstate NEW,UNTRACKED
6 11957 717K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 ctstate NEW,UNTRACKED
7 186 10756 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:110 ctstate NEW,UNTRACKED
8 4329 238K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 ctstate NEW,UNTRACKED
9 25 1368 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:995 ctstate NEW,UNTRACKED
10 21 1088 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:143 ctstate NEW,UNTRACKED
11 3 120 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:20 ctstate NEW,UNTRACKED
12 114 5008 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpts:10000:10100 ctstate NEW,UNTRACKED
13 26 1376 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:20000 ctstate NEW,UNTRACKED
14 31 2022 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:53 ctstate NEW,UNTRACKED
15 7 288 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 ctstate NEW,UNTRACKED
16 31049 1629K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpts:1025:65535 ctstate NEW,UNTRACKED
17 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:2222 ctstate NEW,UNTRACKED
18 720 43028 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:587 ctstate NEW,UNTRACKED
Chain IN_public_deny (1 references)
num pkts bytes target prot opt in out source destination
Chain IN_public_log (1 references)
num pkts bytes target prot opt in out source destination
Chain IN_public_post (1 references)
num pkts bytes target prot opt in out source destination
Chain IN_public_pre (1 references)
num pkts bytes target prot opt in out source destination
Chain OUTPUT_direct (1 references)
num pkts bytes target prot opt in out source destination
|
There is definitely something wrong with the iptables rules: there is a lot of overlap. For example, take a close look at the POSTROUTING chain: rules #1, #2 and #3 are identical, and so are rules #6, #7 and #8.
Bear in mind that rules are processed from top to bottom. Take another close look at the POSTROUTING_ZONES chain: rule #1 has no hits (0 pkts), which means that traffic is being blocked by earlier rules. I suggest you do a rule cleanup and re-add the rules, making sure you do not duplicate any.
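One quick way to spot exact duplicates is to run the iptables-save dump through awk, which keeps only the first occurrence of each line. A sketch on sample rules (in practice you would pipe the real iptables-save output through the same filter and review the result before loading it with iptables-restore):

```shell
# Drop exact duplicate rule lines; awk's seen[] array remembers every
# line already printed, so only first occurrences pass through.
printf '%s\n' \
  '-A POSTROUTING -o ppp0 -j MASQUERADE' \
  '-A POSTROUTING -o ppp0 -j MASQUERADE' \
  '-A POSTROUTING -o eth0 -j MASQUERADE' \
  | awk '!seen[$0]++'
```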
| PPTP VPN Ubuntu no web access |
1,626,675,809,000 |
I have a Linux box (let's call it Network Probe) that I would like to use for network monitoring. I need to redirect ALL the traffic for ALL protocols coming in on eth0 to another machine (Network Mon) on the same subnet, while still being able to connect to the probe over SSH.
Is it possible to redirect all the network traffic except SSH?
I tried the following iptables rules; the NAT component works well, but I cannot access SSH any more.
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -t nat -A PREROUTING -p all -j DNAT --to-destination x.x.x.x
Thanks in advance for your help.
|
Insert an ACCEPT before DNAT
iptables -t nat -I PREROUTING -p tcp --dport 22 -j ACCEPT
-I inserts it at the beginning of the chain
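You can confirm the resulting order with rule numbers and packet counters (a sketch; the port 22 ACCEPT should show up as rule 1, and its pkts counter should grow with each new SSH connection while the DNAT rule stays untouched for SSH):

```shell
# List nat/PREROUTING with rule numbers and counters, to verify the
# SSH ACCEPT sits above the catch-all DNAT rule. Requires root.
iptables -t nat -nvL PREROUTING --line-numbers
```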
| Redirect all network traffic except ssh |
1,626,675,809,000 |
Let's say, that one has following network topology:
NAT gateway linux-router has a following SNAT rule in place:
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
SNAT all -- 10.99.99.50 anywhere to:1.1.1.6
In addition, as seen on the drawing, the 1.1.1.6 address is configured on lo interface. Technically, this is not needed, i.e one can delete it and the linux-svr still has the connectivity. Thus, is there a point to configure SNAT source address in NAT gateway? Only for troubleshooting purposes as it is easier to associate and trace back 1.1.1.6 to linux-svr?
|
netfilter is route-agnostic. That's the important thing that explains what happens below. netfilter's NAT handling alters addresses, and in some cases, when this is done before a route decision, this in turn alters the route decision. netfilter doesn't do route decisions itself: that's only the role of the routing stack.
I'm assuming below that linux-router has no additional firewalling rule (in the default iptables filter table), because it was never mentioned in the question. Also to avoid multiplying cases to address, I'm assuming there's no other system to consider beside linux-srv (and linux-router) in the 10.99.99.0/24 LAN (it wouldn't be difficult to address them too).
About removing 1.1.1.6
SNAT happens at POSTROUTING, after any routing decision. If SNAT sees an IP matching the given criteria, it will add a conntrack entry to handle replies. Something similar to this happens on linux-router (using conntrack -E -e NEW):
[NEW] tcp 6 120 SYN_SENT src=10.99.99.50 dst=8.8.8.8 sport=57490 dport=80 [UNREPLIED] src=8.8.8.8 dst=1.1.1.6 sport=80 dport=57490
It's not netfilter's job to ensure that replies will really come back. That's again the routing stack's job (including outside routing, over which linux-router has no control).
Before being deleted, 1.1.1.6 was an IP of linux-router. The interface this IP was added to didn't really matter, as Linux follows the weak host model: it can answer queries to this IP received on any interface. Removing this entry won't prevent linux-router from receiving packets for 1.1.1.6, since M10i has a specific route to reach 1.1.1.6: via 1.1.215.48, which belongs to linux-router. So linux-router never gets an ARP request for this IP: the ARP request coming from M10i is always for 1.1.215.48 (to tell the same thing, M10i's ARP table will only have 1.1.215.48 cached, not 1.1.1.6). That means the existence of this IP won't matter: linux-router will always receive traffic for 1.1.1.6. But now there's a difference:
If the incoming packet doesn't match a previously created conntrack entry
If the packet is not related to previous activity from linux-srv, this packet will reach the first route decision, as seen in this schematic. According to its current routing table this should be this:
# ip route get from 198.51.100.101 iif eth0 1.1.1.6
1.1.1.6 from 198.51.100.101 via 1.1.215.60 dev eth0
cache iif eth0
If it had been M10i (or any system in the 1.1.215.32/27 LAN), linux-router would also have sent ICMP redirects from time to time, as this shows:
# ip route get from 1.1.215.60 iif eth0 1.1.1.6
1.1.1.6 from 1.1.215.60 via 1.1.215.60 dev eth0
cache <redirect> iif eth0
Anyway, for packets coming from the internet, packets will be sent back to M10i, which is probably implementing Strict Reverse Path Forwarding: this routed-back packet will be dropped by M10i, since its source (198.51.100.101) is on the wrong side of its routing table and is thus filtered by Strict Reverse Path Forwarding. Without Strict Reverse Path Forwarding, this would have caused a loop between M10i and linux-router until the packet's TTL was decremented to 0, at which point the packet would also be dropped.
If the incoming packet does match a flow previously NATed and tracked by conntrack.
Previous example: a reply packet received from 8.8.8.8 tcp port 80 to 1.1.1.6 port 57490, which would be tracked by conntrack -E:
[UPDATE] tcp 6 60 SYN_RECV src=10.99.99.50 dst=8.8.8.8 sport=57490 dport=80 src=8.8.8.8 dst=1.1.1.6 sport=80 dport=57490
[UPDATE] tcp 6 432000 ESTABLISHED src=10.99.99.50 dst=8.8.8.8 sport=57490 dport=80 src=8.8.8.8 dst=1.1.1.6 sport=80 dport=57490 [ASSURED]
At some pre-routing point, conntrack will handle the "de-SNAT" (as a reminder, this packet will never again traverse iptables' nat table; this is also written in the previous schematic: the "nat" table is only consulted for "NEW" connections). The destination IP is now changed to 10.99.99.50, and the packet reaches the first route decision: it gets routed to linux-srv. Everything works fine.
So that is what happens when you remove 1.1.1.6: it doesn't affect linux-srv as an internet client, but it creates some minor disruption between M10i and linux-router for unrelated ingress packets.
If you want some clients on internet to reach linux-srv using a DNAT rule on linux-router, then for the affected connections (eg: a web server on linux-srv tcp port 80), everything will work without disruption. For other attempts, again there's the minor issue between M10i and linux-router.
About removing the source IP selector/filter to the SNAT rule
One piece of information wasn't provided: whether there is also a selector/filter on the outgoing interface or not. The two rules below would get the same output from iptables -t nat -n -L (but not from iptables -t nat -n -v -L, or better, iptables-save):
iptables -t nat -A POSTROUTING -o eth0 -s 10.99.99.50 -j SNAT --to-source 1.1.1.6
or
iptables -t nat -A POSTROUTING -s 10.99.99.50 -j SNAT --to-source 1.1.1.6
Actually it won't matter in this case if you now use either of these two commands:
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 1.1.1.6
iptables -t nat -A POSTROUTING -j SNAT --to-source 1.1.1.6
with 1.1.1.6 still belonging to linux-router
Because a private destination IP address cannot be seen coming in on the wire from eth0's side, linux-router can effectively only route one IP address: linux-srv's 10.99.99.50, and this routing can only happen when it's initiated from 10.99.99.50 first, so that it's SNATed to a public IP. Since iptables will create a new conntrack entry only on the initial connection (state NEW), after this the conntrack entry won't change anymore and everything will work fine.
with 1.1.1.6 removed from linux-router
For linux-srv everything will still work as expected when it connects to Internet: the previous explanation also applies.
For any unknown incoming connection from outside to 1.1.1.6 (eg, from 198.51.100.101):
Routing stack determines that 1.1.1.6 should be routed to M10i (see the explanation made earlier). A tentative conntrack entry is added in state NEW and the packet reaches nat/POSTROUTING: the packet is SNATed to 1.1.1.6 and sent back to M10i. M10i has a route to 1.1.1.6 and sends the altered packet back to linux-router, with both source and destination IP set to 1.1.1.6 (as the source is on the correct side of its routing tables, it's not even dropped by Strict Reverse Path Forwarding). linux-router receives a packet ... from there I can't tell if it's a bug or not, but here's what was captured in an experiment reproducing your case with conntrack -E, with a single TCP SYN packet received from 198.51.100.101:
# conntrack -E
[NEW] tcp 6 120 SYN_SENT src=198.51.100.101 dst=1.1.1.6 sport=48202 dport=5555 [UNREPLIED] src=1.1.1.6 dst=1.1.1.6 sport=5555 dport=48202
[NEW] tcp 6 120 SYN_SENT src=1.1.1.6 dst=1.1.1.6 sport=48202 dport=5555 [UNREPLIED] src=1.1.1.6 dst=1.1.1.6 sport=5555 dport=60062
[NEW] tcp 6 120 SYN_SENT src=1.1.1.6 dst=1.1.1.6 sport=60062 dport=5555 [UNREPLIED] src=1.1.1.6 dst=1.1.1.6 sport=5555 dport=23442
[NEW] tcp 6 120 SYN_SENT src=1.1.1.6 dst=1.1.1.6 sport=23442 dport=5555 [UNREPLIED] src=1.1.1.6 dst=1.1.1.6 sport=5555 dport=54429
[NEW] tcp 6 120 SYN_SENT src=1.1.1.6 dst=1.1.1.6 sport=54429 dport=5555 [UNREPLIED] src=1.1.1.6 dst=1.1.1.6 sport=5555 dport=7652
[NEW] tcp 6 120 SYN_SENT src=1.1.1.6 dst=1.1.1.6 sport=7652 dport=5555 [UNREPLIED] src=1.1.1.6 dst=1.1.1.6 sport=5555 dport=34503
[NEW] tcp 6 120 SYN_SENT src=1.1.1.6 dst=1.1.1.6 sport=34503 dport=5555 [UNREPLIED] src=1.1.1.6 dst=1.1.1.6 sport=5555 dport=49256
[NEW] tcp 6 120 SYN_SENT src=1.1.1.6 dst=1.1.1.6 sport=49256 dport=5555 [UNREPLIED] src=1.1.1.6 dst=1.1.1.6 sport=5555 dport=58399
[NEW] tcp 6 120 SYN_SENT src=1.1.1.6 dst=1.1.1.6 sport=58399 dport=5555 [UNREPLIED] src=1.1.1.6 dst=1.1.1.6 sport=5555 dport=54522
[...]
Even if the netfilter's behaviour isn't normal, there's really a loop happening between M10i and linux-router (till TTL drops to 0).
Conclusion
Don't remove the local IP address 1.1.1.6. You would be creating routing problems, and it's not netfilter's role to correct those routing problems. Even if you add firewalling rules preventing those loops, it's not sane behaviour to rely on incorrect routes.
Likewise, you could choose to remove the source IP selector from the SNAT rule, but better not if no interface is selected either (i.e. if you chose this rule: iptables -t nat -A POSTROUTING -j SNAT --to-source 1.1.1.6): it only works because there are private IP addresses, non-routable on the Internet, in play. If that were not the case, any connection from outside trying to reach the LAN behind linux-router's eth2 interface would be SNATed to 1.1.1.6.
That would also be the case, for example, if you added a DNAT rule to make some services from linux-srv reachable from the Internet, preventing linux-srv from ever seeing a source address different from 1.1.1.6. Here's a concrete example in a simulation (with a sane restoration of 1.1.1.6 to linux-router):
# ip -br a
lo UNKNOWN 127.0.0.1/8 1.1.1.6/32
eth0@if5 UP 1.1.215.48/27
eth2@if4 UP 10.99.99.254/24
# iptables -t nat -A PREROUTING -d 1.1.1.6 -p tcp --dport 80 -j DNAT --to-destination 10.99.99.50
# iptables-save | grep -v ^#
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A PREROUTING -d 1.1.1.6/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.99.99.50
-A POSTROUTING -j SNAT --to-source 1.1.1.6
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
# conntrack -E
[NEW] tcp 6 120 SYN_SENT src=198.51.100.101 dst=1.1.1.6 sport=45752 dport=80 [UNREPLIED] src=10.99.99.50 dst=1.1.1.6 sport=80 dport=45752
[UPDATE] tcp 6 60 SYN_RECV src=198.51.100.101 dst=1.1.1.6 sport=45752 dport=80 src=10.99.99.50 dst=1.1.1.6 sport=80 dport=45752
[UPDATE] tcp 6 432000 ESTABLISHED src=198.51.100.101 dst=1.1.1.6 sport=45752 dport=80 src=10.99.99.50 dst=1.1.1.6 sport=80 dport=45752 [ASSURED]
While it might not be clear, that means the expected replies are from 10.99.99.50 to 1.1.1.6 (not to 198.51.100.101): linux-srv stays blind to which IP address really connected to it; it will always see 1.1.1.6.
| Is there a point to configure SNAT source address in NAT gateway? |
1,626,675,809,000 |
I'm running this project for creating access points, which is built on top of hostapd.
Running this works as expected, my ethernet connection is available as wifi:
sudo create_ap wlan0 eth0 wifiname
I was hoping that port 80 on my host machine would automatically be exposed to the client but it isn't.
How can I create a hostapd hotspot that exposes port 80? I'm thinking I might need to use iptables or dnsmasq but I'm not sure.
I'm using the project linked above as a starting point, but my main goal is to broadcast a port over a wifi hotspot.
Update: I found that by default the host is available at IP 192.168.12.1. I'm now looking for a way to forward all (or at minimum localhost) traffic on the hotspot to this IP.
But I still need to be able to resolve other domains on the host itself.
|
I was able to get the behavior I wanted using dnsmasq. Originally I was confused because I was adding the following to the default dnsmasq.conf location:
address=/#/192.168.12.1
It should forward all traffic to the IP 192.168.12.1 but I found it wasn't working.
Later on, while the program was running, I noticed in top that create_ap had launched dnsmasq, but with a custom dnsmasq.conf in a /tmp/ folder.
Reading through the source I found this snippet:
MTU=$(get_mtu $INTERNET_IFACE)
[[ -n "$MTU" ]] && echo "dhcp-option-force=option:mtu,${MTU}" >> $CONFDIR/dnsmasq.conf
[[ $ETC_HOSTS -eq 0 ]] && echo no-hosts >> $CONFDIR/dnsmasq.conf
[[ -n "$ADDN_HOSTS" ]] && echo "addn-hosts=${ADDN_HOSTS}" >> $CONFDIR/dnsmasq.conf
if [[ "$SHARE_METHOD" == "none" && "$REDIRECT_TO_LOCALHOST" == "1" ]]; then
cat << EOF >> $CONFDIR/dnsmasq.conf
address=/#/$GATEWAY
Inside that statement, I added the following line to append my configuration to the temporary dnsmasq file:
echo "address=/#/${GATEWAY}" >> $CONFDIR/dnsmasq.conf
After adding that, any http address on the AP forwarded to 192.168.12.1. The browser automatically assumes port 80 when one isn't provided, so that became a non-issue.
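To check the wildcard entry from a client on the AP, any hostname lookup against dnsmasq should come back as the gateway address, since dnsmasq's address=/#/... form answers every name with the given IP (a sketch; dig is assumed to be installed on the client):

```shell
# Any A lookup against the AP's dnsmasq should return the gateway,
# regardless of the name queried (should print 192.168.12.1).
dig +short @192.168.12.1 anything.example
```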
| Forward ports over hostapd? |
1,626,675,809,000 |
I am having difficulties configuring NAT with iptables on my firewall.
My firewall setup is as follow:
it is a layer 2 transparent firewall, between my gateway and my ISP's gateway
I bridged two interfaces as br0. The two interfaces are eno0 on my ISP side and eno1 on my local network side
I set up basically no iptables rules except one for NAT
Here are my rules:
root@firewall:~# iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
root@firewall:~# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-A POSTROUTING -s 10.50.1.0/24 -j SNAT --to-source xxx.195.142.205
root@firewall:~# iptables -t mangle -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
The problem is, in short, that address translation works for outgoing traffic but not for the replies. Here is a test example:
I connected a laptop with IP 10.50.1.7 on my LAN and used it to ping 8.8.8.8
on the firewall, with tcpdump -i eno1, I see ICMP requests from 10.50.1.7 to 8.8.8.8, but no replies
on the firewall, with tcpdump -i eno0, I see ICMP requests from xxx.195.142.205 to 8.8.8.8, and the ICMP replies from 8.8.8.8 to xxx.195.142.205
Obviously, on the laptop, I do not get the ICMP replies
So the replies are not translated back to the local IP. What am I missing?
Thanks for your help!
(NB: when removing the NAT rule and using the public IP xxx.195.142.205 on the laptop, I have full internet access)
|
As suggested by @dirkt, it looks like conntrack does not work well with a bridge. So iptables rules that don't require connection tracking seem to work on a bridge, but NAT does not.
Problem solved as soon as I configured my firewall as a layer 3 firewall.
In case others are interested: I extensively searched the web for whether it is possible to use a transparent layer 2 firewall with NAT, but never got a straight answer.
The ebtables website does suggest that it is possible:
bridge-nf code makes iptables see the bridged IP packets and enables
transparent IP NAT.
I never found out which ebtables command would make it work, though.
| Use NAT with iptables and a bridge |
1,626,675,809,000 |
I have two separate FreeBSD VMs. In VM1 I added two interfaces.
em0: 192.168.1.0/24
em1: 192.168.2.0/24.
In VM2 I have one interface with an IP address in 192.168.2.0/24.
In VM1 I set up NAT and now I want to see whether it works properly. I'm able to connect to the internet from VM2 (the internet comes in through em0 on VM1), so I know the router in VM1 works correctly. Is there any command to show the list of NAT translations, or any other way to show that my IP has been translated?
|
You could use tcpdump -i em0 on VM1 and observe that all packets to and from the internet carry the address of VM1's em0 interface.
If you observe the same packets on em1, you will see the original, untranslated address instead.
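A concrete way to run that comparison, assuming VM2 pings a known outside host such as 8.8.8.8 (the interface names follow the question; the filters just cut out unrelated traffic, and both commands need root):

```shell
# WAN side: source addresses should be em0's (translated) address.
tcpdump -ni em0 icmp and host 8.8.8.8
# LAN side: the same packets still carry VM2's 192.168.2.x address.
tcpdump -ni em1 icmp and host 8.8.8.8
```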
| Setting up a NAT in VM |
1,626,675,809,000 |
I set up a virtual ethernet (veth) pair between the default namespace and another namespace named RoutableNS, as follows:
 --------------                --------------
 -   veth0    - -------------- -   veth1    -
 -  10.5.1.1  -                -  10.5.1.2  -
 --------------                --------------
   default NS                    RoutableNS
I can ping the outside world from namespace RoutableNS through interface veth1, but it turns out that when I SNAT (or MASQUERADE) incoming traffic to 10.5.1.1 (or 10.5.1.2), nothing arrives at the veth interface.
I tried the same thing with tun devices and saw that it's not possible to MASQUERADE to a tun device when its IP is not routable to the outside world (in the default namespace).
So I have two questions:
Is this behaviour of SNAT (MASQUERADE) documented somewhere? I mean the requirement that the new source IPs should be routable to the outside world in the current namespace.
Are there networking options (sysctls) letting me do this?
|
It's perfectly possible to masquerade or SNAT a device whose IP is not routable to the outside world. And being in a network namespace or not makes no difference.
You conveniently forgot to tell us what exactly you tried, but keep in mind that SNAT and MASQUERADE only work in the POSTROUTING chain of the nat table (while DNAT only works in the PREROUTING chain), a fact which is well documented, and which you can't avoid mentioning explicitly in the iptables commands.
That means SNAT happens as the last step before the packet leaves the interface, and DNAT happens as a very early step for packets entering the interface from the outside.
So the usual setup is that a router (host or namespace) NATs IPs that come in from one side, to everything on the other side:
                +---------------+
                |               |
masq'ed IP --<--| eth0     eth1 |--<-- original IP
10.0.0.99       |               |      10.0.0.1
                +---------------+
               Host or Namespace
and you need a corresponding DNAT for incoming connections, so:
iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.1/32 -j SNAT --to 10.0.0.99
iptables -t nat -A PREROUTING -i eth0 -d 10.0.0.99/32 -j DNAT --to 10.0.0.1
You didn't say exactly which IPs you want to masquerade as which IPs, but if your main namespace acts as such a router, and you want to masquerade "RoutableNS", that is 10.5.1.2, to the outside world, then this is doable by using the outgoing interface of your main namespace.
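For reference, here is a minimal sketch reproducing the question's topology with the main namespace acting as the router (eth0 stands in for whatever egress interface the main namespace has; all names are illustrative, and the commands need root):

```shell
# Create the namespace and a veth pair, one end in each namespace.
ip netns add RoutableNS
ip link add veth0 type veth peer name veth1
ip link set veth1 netns RoutableNS
ip addr add 10.5.1.1/24 dev veth0
ip link set veth0 up
ip netns exec RoutableNS ip addr add 10.5.1.2/24 dev veth1
ip netns exec RoutableNS ip link set veth1 up
ip netns exec RoutableNS ip route add default via 10.5.1.1
# Forward packets in the main namespace and masquerade them on egress.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.5.1.0/24 -o eth0 -j MASQUERADE
```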
| SNAT to unroutable interface |
1,466,655,932,000 |
All
I have a Cisco 877 router and a Linode VPS running OpenBSD 5.9, with a GRE tunnel running in between, which works great: I can ping from either side. I have set up a static route in the Cisco router to route traffic to WhatsMyIP.org (so I can see if it's working) but, try as I might, I can't get OpenBSD's PF to apply NAT to the traffic from the GRE tunnel. The configuration parses and the traffic routes, but no states are generated.
Is what I'm trying to achieve even possible? My topology and /etc/pf.conf are below. (NOTE: updated as per Bink's answer.)
# $OpenBSD: pf.conf,v 1.54 2014/08/23 05:49:42 deraadt Exp $
#
# See pf.conf(5) and /etc/examples/pf.conf
set skip on lo
block return # block stateless traffic
ext_if = "em0"
int_if = "gre0"
int_net = "192.168.2.0/24"
pass out on $ext_if from $int_net to any nat-to ($ext_if)
pass # establish keep-state
# By default, do not permit remote connections to X11
block return in on ! lo0 proto tcp to port 6000:6010
pass quick on gre proto gre no state
Topology: (network diagram omitted)
ifconfig output (IPs redacted):
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 32768
priority: 0
groups: lo
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3
inet 127.0.0.1 netmask 0xff000000
em0: flags=18843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,MPSAFE> mtu 1500
lladdr f2:3c:91:0a:5b:a9
priority: 0
groups: egress
media: Ethernet autoselect (1000baseT full-duplex)
status: active
inet E.F.G.H netmask 0xffffff00 broadcast E.F.G.255
enc0: flags=0<>
priority: 0
groups: enc
status: active
pflog0: flags=141<UP,RUNNING,PROMISC> mtu 33144
priority: 0
groups: pflog
gre0: flags=9011<UP,POINTOPOINT,LINK0,MULTICAST> mtu 1476
priority: 0
groups: gre
tunnel: inet A.B.C.D -> E.F.G.H
inet 172.16.56.1 --> 172.16.56.2 netmask 0xffffff00
|
It seems that:
pass out on $ext_if from $int_net to any nat-to $ext_if
... doesn't work, it has to be this:
match out on $ext_if from $int_net to any nat-to $ext_if
pass on $ext_if from $int_net to any
Also, it helps to make sure net.inet.ip.forwarding is set to 1.
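On OpenBSD the forwarding sysctl can be checked, enabled, and persisted like this (a small sketch, run as root):

```shell
# Show the current value, enable forwarding now, and persist it.
sysctl net.inet.ip.forwarding
sysctl net.inet.ip.forwarding=1
echo 'net.inet.ip.forwarding=1' >> /etc/sysctl.conf
```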
| Applying NAT to traffic from GRE tunnel in OpenBSD PF |
1,466,655,932,000 |
I want to make a system that has a few subdomains. I map each subdomain to an IP address using DNS.
I used random IP addresses for the question
165.93.198.34 x.mydomain.com (Which is actually 165.93.198.220:8080)
165.93.198.38 z.mydomain.com (Which is actually 165.93.198.220:81)
165.93.198.44 c.mydomain.com (Which is actually 165.93.198.220:443)
165.93.198.220 mydomain.com
Using iptables, when a request comes to IP address 165.93.198.34 I want it to be answered from 165.93.198.220:8080.
iptables -t nat -A PREROUTING -p tcp -d 165.93.198.34 --jump DNAT --to-destination 165.93.198.220:8080
But I couldn't make the prerouting work.
[root@static ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ftp
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:down
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:webcache
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:81
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@static ~]# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DNAT tcp -- anywhere 165.93.198.34-iprovider.com to:165.93.198.220:8080
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
What am I doing wrong?
|
If your target IP (165.93.198.220) is another system in the network
add an ACCEPT rule in the FORWARD chain like this:
iptables -A FORWARD -p tcp -d 165.93.198.220 --dport 8080 -j ACCEPT
also check if ip forward is enabled:
sysctl net.ipv4.ip_forward
if it is not set to 1, enable it on the fly with:
sysctl -w net.ipv4.ip_forward=1
or
echo 1 > /proc/sys/net/ipv4/ip_forward
To make it persistent across reboots, edit /etc/sysctl.conf and add the line:
net.ipv4.ip_forward = 1
If your target IP (165.93.198.220) is on the local machine
add an ACCEPT rule in the INPUT chain like this:
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
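For the forwarded-host case, the pieces combine into something like the following sketch. Note that the questioner's FORWARD chain ends with a REJECT rule, so appending (-A) would land the ACCEPT rules after it and they would never match — insert with -I in that situation:

```shell
# Enable routing between interfaces
sysctl -w net.ipv4.ip_forward=1

# Rewrite the destination of traffic aimed at the alias IP
iptables -t nat -A PREROUTING -p tcp -d 165.93.198.34 \
    -j DNAT --to-destination 165.93.198.220:8080

# Allow the rewritten traffic and its replies through FORWARD
# (-I inserts at the top, before the existing REJECT rule)
iptables -I FORWARD -p tcp -d 165.93.198.220 --dport 8080 -j ACCEPT
iptables -I FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
```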
| Redirect IP to specific port
1,466,655,932,000 |
So, one of my servers is behind NAT, and since there is already a publicly accessible apache server going on my LAN, I decided to access it from the outside with different ports, and remap them to the standard port of the apache on this new machine I want to get a cert on. I did that with classic port forwarding via my router.
Now, if I want to use letsencrypt on said server, it obviously fails because it tries to use the standard port, which will direct to my other server's apache installation (which btw. already has a letsencrypt-cert).
Now I guess I need some way to tell letsencrypt to use my self-defined port instead of the standard one to connect from the outside, but I haven't found anything yet. Is that even possible? If it is, how?
|
It's not possible to use a non-standard port, as a conforming ACME server will still try to contact the default ports 80 / 443 for the http-01 / tls-sni-01 challenges.
For example, certbot has separate options to listen on a non-standard port, but that still doesn't help to pass the challenge:
certonly:
Options for modifying how a cert is obtained
--tls-sni-01-port TLS_SNI_01_PORT
Port used during tls-sni-01 challenge. This only
affects the port Certbot listens on. A conforming ACME
server will still attempt to connect on port 443.
(default: 443)
--http-01-port HTTP01_PORT
Port used in the http-01 challenge.This only affects
the port Certbot listens on. A conforming ACME server
will still attempt to connect on port 80. (default:
80)
Probably in your case the best way would be to use another verification method -- webroot.
In this case you don't need your 80 and 443 to be available to the outside world, but just a specific directory (which might be configured with proxy on webserver side, I assume).
Details are available here
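A minimal sketch of the webroot method (domain and path are placeholders): the ACME server fetches http://<domain>/.well-known/acme-challenge/... on port 80, so that path is what must be reachable — e.g. proxied by the front webserver to the NATed host:

```shell
certbot certonly --webroot -w /var/www/example -d internal.example.com
```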
| Changing the port letsencrypt tries to connect on |
1,466,655,932,000 |
Background: I am trying to set up an access point on Linux. The ultimate aim is to run SSLStrip (for an exercise) so I need to be able to do something like this, to redirect port 80 traffic through port 10000, on which SSLStrip listens.
iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 10000
Using Linux - Kali2, 64bit. Machine has internet access via eth0.
This is how I am setting up the access point:
airmon-ng start wlan0 6
modprobe tun
ifconfig wlan0mon down
iwconfig wlan0mon mode monitor
ifconfig wlan0mon up
airbase-ng -e "MyAccessPoint" -c 6 wlan0mon &
ifconfig "at0" up
ifconfig "at0" "10.0.0.1" netmask "255.255.255.0"
ifconfig "at0" mtu 1500
route add -net "10.0.0.0" netmask "255.255.255.0" gw "10.0.0.1" dev "at0"
Into /etc/dhcp/dhcpd.conf I put the following:
subnet 10.0.0.0 netmask 255.255.255.0 {
authoritative;
range 10.0.0.100 10.0.0.200;
default-lease-time 3600;
max-lease-time 7200;
option subnet-mask 255.255.255.0;
option broadcast-address 10.0.0.255;
option routers 10.0.0.1;
option domain-name-servers 8.8.8.8;
  option domain-name "freeinternet.co.uk";
}
Check that /etc/default/isc-dhcp-server has INTERFACES="at0"
Start the DHCP Server:
/etc/init.d/isc-dhcp-server restart
Configure NAT and enable ip forwarding:
iptables --flush
iptables --table nat --flush
iptables --delete-chain
iptables --table nat --delete-chain
iptables -P FORWARD ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
echo "1" > /proc/sys/net/ipv4/ip_forward
At this stage I believe I should be able to connect a client to my access point and browse the web.
The client sees the access point, connects, gets an IP address from the server's DHCP (10.0.0.101) and then has the following routing table:
mark@laptop15:~/TT$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.0.1 0.0.0.0 UG 0 0 0 wlan0
10.0.0.0 0.0.0.0 255.255.255.0 U 9 0 0 wlan0
However it cannot even ping the server (10.0.0.1) - no errors, we just don't get the pings back:
mark@laptop15:~/TT$ ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
And it stays like that forever.
What am I doing wrong please?
|
Turns out it was network-manager interfering. Switched it off
service network-manager stop
and everything works fine.
I suspect there exists a less drastic solution than disabling it altogether, but it does what I need for now.
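One less drastic option (an assumption on my side, not tested in the setup above) is telling NetworkManager to ignore the relevant interfaces instead of stopping it entirely:

```shell
# Mark the AP interfaces as unmanaged (see NetworkManager.conf(5))
cat >> /etc/NetworkManager/NetworkManager.conf <<'EOF'
[keyfile]
unmanaged-devices=interface-name:wlan0mon;interface-name:at0
EOF
systemctl restart NetworkManager
```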
| Linux access point (airmon-ng/airbase-ng) not working
1,466,655,932,000 |
eth0 (192.168.1.0/24) --> wan
eth2 (192.168.10.0/24) --> lan0
I use these rules to enable NAT on my linux gateway:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT
iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
Why do some online howtos use rules like these instead?
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -s 192.168.10.0/24 -d 0/0 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -d 192.168.10.0/24 -j ACCEPT
iptables -A FORWARD -s 192.168.10.0/24 -j ACCEPT
iptables -A FORWARD -d 192.168.10.0/24 -j ACCEPT
|
If you use these two lines:
iptables -A FORWARD -s 192.168.10.0/24 -j ACCEPT
iptables -A FORWARD -d 192.168.10.0/24 -j ACCEPT
Then these two have no value as far as security is concerned:
iptables -A FORWARD -s 192.168.10.0/24 -d 0/0 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -d 192.168.10.0/24 -j ACCEPT
At first you have iptables -A FORWARD -s 192.168.10.0/24 -d 0/0 -j ACCEPT; this makes iptables forward packets originating from the 192.168.10.0/24 subnet destined to 0/0, meaning every other network. Later you have iptables -A FORWARD -s 192.168.10.0/24 -j ACCEPT — in practice these two rules mean the same thing.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -d 192.168.10.0/24 -j ACCEPT makes iptables accept and forward a packet destined to 192.168.10.0/24 only if it belongs to an ESTABLISHED or RELATED connection. Later you have the conflicting rule iptables -A FORWARD -d 192.168.10.0/24 -j ACCEPT, which forwards any packet destined for 192.168.10.0/24, making the first rule useless: any packet destined for 192.168.10.0/24 that does not match the first rule will match this one and be forwarded no matter what.
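In other words, the three rules from the question's first snippet are already the tight stateful form; the extra rules only add redundancy and loosen the filtering:

```shell
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT
iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
```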
| linux nat different rules |
1,466,655,932,000 |
Desired scenario: a small subnetwork of Linux machines, all accessible through another Linux machine (acting as an IP router). These machines would be pre-configured with addresses on a private network (192.168.x.x or 10.x.x.x). However, each would be accessible though the routing machine with public IP addresses, one for each, configured on the routing machine.
This would be similar to NAT or IP masquerading, but with separate public IP addresses. (It is acceptable to assume that the public network will have a gateway address: external router.)
It seems like this should be doable with address translation but I cannot figure out how to configure this. I am not able to find anything searching.
Can this be configured, and if so, how?
|
Assuming, IP_EXT1 and IP_EXT2 are the external IP addresses for respectively machines #1 and #2, and IP_INT1 and IP_INT2 their respective internal IP addresses.
IP_EXT1 and IP_EXT2 are in fact addresses of the routing machine, either aliases for the same network interface or two distinct interfaces.
Then, the iptables configuration on the routing machine should be as simple as (untested; note that DNAT is only valid in the PREROUTING and OUTPUT chains, SNAT in POSTROUTING):
iptables -t nat -A PREROUTING --destination $IP_EXT1 -j DNAT --to-destination $IP_INT1
iptables -t nat -A POSTROUTING --source $IP_INT1 -j SNAT --to-source $IP_EXT1
iptables -t nat -A PREROUTING --destination $IP_EXT2 -j DNAT --to-destination $IP_INT2
iptables -t nat -A POSTROUTING --source $IP_INT2 -j SNAT --to-source $IP_EXT2
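When a whole aligned block maps one-to-one, the NETMAP target can replace the per-host rule pairs (a sketch with placeholder /28 prefixes; assumes the xt_NETMAP module is available):

```shell
# Rewrite only the network part, keep the host part
iptables -t nat -A PREROUTING  -d 203.0.113.0/28 -j NETMAP --to 10.0.0.0/28
iptables -t nat -A POSTROUTING -s 10.0.0.0/28    -j NETMAP --to 203.0.113.0/28
```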
| Network translation, one to one, not one to many |
1,466,655,932,000 |
I have a linux SUSE host which has both ipv4 and v6 enabled, below are the interfaces:- eth0,app,eth1 however the default route is available for ipv4 via eth0. Kubernetes is running on this host(single node), and the cluster is in ipv6. I need help with some kind of mechanism to access my cluster from outside host, so will port forwarding my request from localhost:port to ipv6ClusterIP:port work?
The iptables rules below did not work:
sudo ip6tables -t nat -A OUTPUT -p tcp --dport <localhost_port> -j DNAT --to-destination [ipv6_addr]:<port>
I have tried socat, but that cannot be a permanent solution as the reboot scenario should also be handled here.
|
I was able to achieve this through socat,
socat TCP-LISTEN:<localhost_port>,reuseaddr,fork TCP6:[ipv6 cluster IP]:<port>
Just a concern: if we use this as a daemon service, will there be any security concerns or impact?
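To handle the reboot scenario, the relay can be wrapped in a small systemd unit (a sketch; the unit name, port and cluster address are placeholders):

```shell
cat > /etc/systemd/system/cluster-relay.service <<'EOF'
[Unit]
Description=Relay a local TCP port to the IPv6 cluster IP
After=network-online.target

[Service]
ExecStart=/usr/bin/socat TCP-LISTEN:8080,reuseaddr,fork TCP6:[fd00::10]:8080
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now cluster-relay.service
```

As for security: socat itself does no filtering, so if only local access is wanted, restrict the listener, e.g. TCP-LISTEN:8080,bind=127.0.0.1,reuseaddr,fork.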
| Packet forwarding from dual stack interface to localhost |
1,466,655,932,000 |
Our router machine has multiple public IPs (/27) on its WAN interface. Now, I want to add dnat rules which match specific dport/saddr/daddr combinations.
My dream would be something like this:
map one2one_dnat {
# dst_addr . src_addr . proto . dst_port : dnat_to . dnat_to_port
type ipv4_addr . ipv4_addr . inet_proto . inet_service : ipv4_addr . inet_service
flags interval
counter
comment "1-1 dnat"
elements = {
42.42.42.5 . 0.0.0.0/0 . tcp . 8888 : 10.42.42.5 . 8888
}
}
# And then later in a chain
ip daddr . ip saddr . ip protocol . th dport dnat to @one2one_dnat
However, this results in:
root@XXX# nft -c -f assembled.nft
assembled.nft:252:59-60: Error: syntax error, unexpected to, expecting newline or semicolon
ip daddr . ip saddr . ip protocol . th dport dnat to @one2one_dnat
^^
The following syntax examples do work (however not with the intended fancy all-in-one map):
dnat ip addr . port to ip saddr . tcp dport map { 42.42.42.5 . 8888 : 10.42.42.5 . 8888}
# And even with saddr restrictions
ip saddr 0.0.0.0/0 dnat ip addr . port to ip saddr . tcp dport map { 42.42.42.5 . 8888 : 10.42.42.5 . 8888}
Any ideas/suggestions are highly appreciated
|
The idea was there, but the wrong syntax was used for the named map case, while the proper syntax was used for the anonymous map case.
A map replaces a key with that key's value if found (otherwise the expression just evaluates to false, stopping further processing). Even when used along a dnat rule, a map named keytovalue must be used with the proper syntax: key map @keytovalue. These 3 parts will then be replaced with the value according to the packet's properties and consumed by the other part of the rule.
OP's attempt doesn't follow the syntax:
ip daddr . ip saddr . ip protocol . th dport dnat to @one2one_dnat
It should be written like this instead:
dnat to ip daddr . ip saddr . ip protocol . th dport map @one2one_dnat
No surprise here: it's the same syntax OP successfully used with anonymous maps: the key (made of concatenations) followed by the keyword map followed by the map reference (which is the definition in the anonymous case). dnat [to] will be the consumer of the resulting ip:port value (only when a match happened).
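Put together, the original named map works unchanged once the rule follows that order (a sketch of just the relevant parts, with the counter and comment omitted):

```
table ip nat {
    map one2one_dnat {
        type ipv4_addr . ipv4_addr . inet_proto . inet_service : ipv4_addr . inet_service
        flags interval
        elements = { 42.42.42.5 . 0.0.0.0/0 . tcp . 8888 : 10.42.42.5 . 8888 }
    }
    chain prerouting {
        type nat hook prerouting priority -100; policy accept;
        dnat to ip daddr . ip saddr . ip protocol . th dport map @one2one_dnat
    }
}
```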
Further notes.
For other readers, this also requires recent enough nftables support, both in userland and kernel parts, for the purpose of doing NAT: nftables 0.9.4 and Linux kernel 5.6:
NAT mappings with concatenations. This allows you to specify the address and port to be used in the NAT mangling from maps, eg.
nft add rule ip nat pre dnat ip addr . port to ip saddr map { 1.1.1.1 : 2.2.2.2 . 30 }
You can also use this new feature with named sets:
nft add map ip nat destinations { type ipv4_addr . inet_service : ipv4_addr . inet_service \; }
nft add rule ip nat pre dnat ip addr . port to ip saddr . tcp dport map @destinations
Replacing the type syntax with a typeof syntax along concatenations, which is usually preferable for readability and to avoid having to figure out all the involved type names, some of them poorly documented, doesn't appear to work currently for OP's case: the use of ip protocol and th appears to clash between the map and the rule at least with nftables 1.0.7 and kernel 6.1.x. So better not use typeof here and keep type, or else split the map into two separate maps, one for UDP one for TCP to avoid this clash.
Splitting would also probably be needed for a similar IPv6 setup, since ip6 nexthdr can't be used safely to replace ip protocol, and the correct replacement, meta l4proto won't play along either.
| Nftables: Dnat with source address restriction and just one map |
1,466,655,932,000 |
My goal is to run two Docker containers on separate networks and have my host (Ubuntu 22.04) perform NAT so that the first network can reach the second.
My setup:
docker network create network1
docker network create network2
docker run --rm -it --network network1 ubuntu:22.04 bash
# In other terminal
docker run --rm -it --network network2 ubuntu:22.04 bash
The first container has an address of 172.19.0.2 and the second has an address of 172.20.0.2.
Running ip addr on my host, I see that the interfaces are br-deadbeef and br-feedbeef, respectively.
I then run
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -A FORWARD -i br-deadbeef -o br-feedbeef -j ACCEPT
iptables -A FORWARD -i br-feedbeef -o br-deadbeef -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -t nat -A POSTROUTING -o br-feedbeef -j MASQUERADE
on the host as root.
However, ping 172.20.0.2 from the first container doesn't succeed. Running Wireshark on the host shows the ICMP packet on the br-deadbeef network going from 172.19.0.2 to 172.20.0.2 but there's no reply.
What am I missing?
|
The issue is the following iptables rules:
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
...
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
...
Chain DOCKER-ISOLATION-STAGE-2 (3 references)
target prot opt source destination
DROP all -- anywhere anywhere
...
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
If you run iptables with the verbose option (-v), you'll see that the DROP target under DOCKER-ISOLATION-STAGE-2 refers to the br-deadbeef interface (there's a DROP rule right under it for br-feedbeef).
Since you added your rules with -A, they were appended to the bottom of the chain which means that the jump to DOCKER-ISOLATION-STAGE-1 and hence to DOCKER-ISOLATION-STAGE-2 took priority.
A simple fix would be to add your rules with -I instead. This will insert them at the top of the chain.
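Alternatively, Docker documents the DOCKER-USER chain as the intended place for user overrides — FORWARD jumps to it before the isolation chains — so this sketch (bridge names as in the question) achieves the same without touching FORWARD directly:

```shell
iptables -I DOCKER-USER -i br-deadbeef -o br-feedbeef -j ACCEPT
iptables -I DOCKER-USER -i br-feedbeef -o br-deadbeef \
    -m state --state RELATED,ESTABLISHED -j ACCEPT
```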
| Set up NAT between Docker networks |
1,466,655,932,000 |
88.198.49.xxx = Hetzner (will run virtual machines on this)
141.94.176.xxx = OVH (contains block below)
164.132.xxx.0/28 = IP block to use on Hetzner as virtual machines
To get GRE set up I ran the following:
OVH:
ip tunnel add gre1 mode gre remote 88.198.49.xxx local 141.94.176.xxx ttl 255
ip link set gre1 up
ip route add 164.132.xxx.0/28 dev gre1
iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
Hetzner:
ip tunnel add gre1 mode gre remote 141.94.176.xxx local 88.198.49.xxx ttl 255
ip link set gre1 up
ip rule add from 164.132.xxx.0/28 table 666
ip route add default dev gre1 table 666
ip route add 164.132.xxx.0/28 dev vmbr0 table 666
iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
/etc/network/interfaces (Hetzner)
auto vmbr0
iface vmbr0 inet static
address 164.132.xxx.1/28
bridge-ports none
bridge-stp off
bridge-fd 0
When I traceroute 164.132.xxx.1 it works great doesn't show Hetzner IP
7 1 ms 1 ms 1 ms 10.95.34.50
8 2 ms 2 ms 2 ms 10.73.1.135
9 2 ms 2 ms 2 ms 10.72.66.67
10 2 ms 2 ms 2 ms 10.164.42.155
11 1 ms 1 ms 1 ms xxxxxx [141.94.176.xxx]
12 17 ms 17 ms 17 ms xxxxxx [164.132.xxx.1]
However when I traceroute the virtual machine using 164.132.xxx.2 I get the following result
7 1 ms 1 ms 1 ms 10.95.34.32
8 2 ms 2 ms 2 ms 10.73.1.45
9 2 ms 2 ms 2 ms 10.72.66.67
10 2 ms 2 ms 1 ms 10.164.42.163
11 1 ms 1 ms 1 ms xxxxxx [141.94.176.xxx]
12 14 ms 14 ms 14 ms xxxxxx [88.198.49.xxx]
13 15 ms 15 ms 15 ms xxxxxx [164.132.xxx.2]
How can I hide it so that 88.198.49.xxx is not shown? I believe this can be done with NAT, but I do not want to use a NAT address as the virtual machine's address. I want to keep the config below for the virtual machines if possible.
IP: 164.132.xxx.2/28
Gateway: 164.132.xxx.1
|
The IP addresses 88.198.49.xxx and 164.132.xxx.1 represent the same system: the hypervisor acting as router at Hetzner.
So when the target is this system itself, it will answer back with the address it was contacted with: 164.132.xxx.1. When it's acting as a router for 164.132.xxx.2, it will generate an ICMP TTL expired in transit and will select the most appropriate address from its routing table. It would have chosen by default the primary IP address on the involved interface (gre1) but as this interface has no IP address set, it will follow some algorithm and get instead 88.198.49.xxx (maybe because it's the local tunnel endpoint address? Doesn't matter).
Normally to change this behavior, one hints the route with an src parameter, as documented:
src ADDRESS
      the source address to prefer when sending to the destinations covered
      by the route prefix.
So replacing the default route in table 666 from:
ip route add default dev gre1 table 666
to:
ip route add default dev gre1 src 164.132.xxx.1 table 666
would work ... except table 666 won't be used when emitting a packet. An ip rule iif gre1 table 666 rule won't help either, since iif doesn't match locally emitted packets, nor will ip rule add oif gre1 table 666, which is only used when emitting from a socket bound to the interface.
So this would require a global behavior in the main routing table, but this could lead to issues at Hetzner when an IP address belonging to OVH is leaked and detected. Not good either.
The easiest I could find instead is to mark packets arriving through gre1 and, to avoid using any heavy conntrack feature, use instead the global fwmark_reflect sysctl toggle:
fwmark_reflect - BOOLEAN
Controls the fwmark of kernel-generated IPv4 reply packets that are
not associated with a socket for example, TCP RSTs or ICMP echo
replies). If unset, these packets have a fwmark of zero. If set, they
have the fwmark of the packet they are replying to.
As this event still happens at the routing decision step, the mark has to be set before routing happens.
So in addition to the src 164.132.xxx.1 change above, also do:
iptables -t mangle -A PREROUTING -i gre1 -j MARK --set-mark 666
ip rule add fwmark 666 lookup 666
sysctl -w net.ipv4.fwmark_reflect=1
The Hetzner IP address won't be leaked anymore.
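For reference, the complete set of additions on the Hetzner side described above is:

```shell
# Prefer the OVH-block address for router-generated packets in table 666
ip route replace default dev gre1 src 164.132.xxx.1 table 666

# Mark tunnel ingress before routing, and let kernel-generated replies
# (like ICMP TTL exceeded) inherit the mark so they use table 666 too
iptables -t mangle -A PREROUTING -i gre1 -j MARK --set-mark 666
ip rule add fwmark 666 lookup 666
sysctl -w net.ipv4.fwmark_reflect=1
```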
| GRE IP to virtual machine (Proxmox) - Traceroute showing full route |
1,466,655,932,000 |
I'm building a captive portal (yeah, just-another ;) )
and now I'm trying to handle the core feature, the iptables rules.
Based on ipset I have a list of valid mac-addresses with name allow-mac.
So this is the current config (stripped to the problem itself):
echo 1 >/proc/sys/net/ipv4/ip_forward
ipset create allow-mac hash:mac counters
ipset add allow-mac XX:XX:XX:XX:XX:XX
IPT="/usr/sbin/iptables"
WAN="eth0"
LAN="eth1"
$IPT -P FORWARD DROP
$IPT -t nat -A POSTROUTING -o $WAN -j MASQUERADE
$IPT -I FORWARD -i $LAN -m set --match-set allow-mac src -j ACCEPT
This should work, but it didn't! So if I change the default FORWARD policy to ACCEPT and invert the rule:
$IPT -P FORWARD ACCEPT
$IPT -I FORWARD -i $LAN -m set ! --match-set allow-mac src -j DROP
I have the desired result, and only clients with known MAC-address in list can forward.
So my question: why is it not working in the first setup? And my second missing feature: the counters module is already added, but only the "upload" traffic from the client is counted — how can I also count the download traffic (in a separate counter)?
|
In the first ruleset, you only allow outgoing traffic, as you specified -i $LAN: so the replies are filtered out. It will probably work simply by removing -i $LAN.
But in this case the whole traffic will be counted (upload + download)
If you want to count upload and download separately, you'll probably have to create two marking policies:
one for the upload, where src mac is marked
one for the download, where dst mac is marked.
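Note that the client's MAC is only visible in the upload direction; on the return path the destination MAC isn't known at FORWARD time, so a both-directions sketch would key the counters on client IPs instead (a swap from the MAC-based set; the set names and client IP are placeholders):

```shell
ipset create acct-up   hash:ip counters
ipset create acct-down hash:ip counters
ipset add acct-up   10.0.0.101
ipset add acct-down 10.0.0.101
# Count uploads by source IP, downloads by destination IP
iptables -I FORWARD -m set --match-set acct-up   src -j ACCEPT
iptables -I FORWARD -m set --match-set acct-down dst -j ACCEPT
```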
| iptables - allow forward rules by set |
1,466,655,932,000 |
There is the requirement to set up a stateless NAT for two UDP connections from a physical network adapter in global network namespace via a linked pair of virtual network adapters to a service running in a special network namespace. This should be done on a CPU (Intel Atom) in an industrial device running Linux (Debian) with kernel 5.9.7.
Here is a scheme of the network configuration which should be set up:
===================== =====================================================
|| application CPU || || communication CPU ||
|| || || ||
|| || || global namespace | nsprot1 namespace ||
|| || || | ||
|| enp4s0 || || enp1s0 | enp3s0 ||
|| 0.0.0.5/30 ========== 0.0.0.6/30 | 192.168.2.15/24 =======
|| || || | ||
|| UDP port 50001 || || UDP port 50001 for sv1 | TCP port 2404 for sv2 ||
|| UDP port 50002 || || UDP port 50002 for sv1 | ||
|| UDP port 53401 || || UDP port 50401 for sv1 | ||
|| UDP port 53402 || || UDP port 50402 for sv1 | ||
|| || || | ||
|| || || vprot0 | vprot1 ||
|| || || 0.0.0.16/31 --- 0.0.0.17/31 ||
|| || || | ||
|| UDP port 53404 || || UDP port 50404 for sv2 - UDP port 50404 for sv2 ||
|| UDP port 53441 || || UDP port 50441 for sv2 - UDP port 50441 for sv2 ||
===================== =====================================================
The application CPU always starts first and opens several UDP ports for communication with service sv1 and service sv2 on the communication CPU via its physical network adapter enp4s0 with the IP address 0.0.0.5.
The output of ss --ipv4 --all --numeric --processes --udp executed on application CPU is:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 0.0.0.0:50001 0.0.0.0:* users:(("sva",pid=471,fd=5))
udp UNCONN 0 0 0.0.0.0:50002 0.0.0.0:* users:(("sva",pid=471,fd=6))
udp ESTAB 0 0 0.0.0.5:53401 0.0.0.6:50401 users:(("sva",pid=471,fd=12))
udp ESTAB 0 0 0.0.0.5:53402 0.0.0.6:50402 users:(("sva",pid=471,fd=13))
udp ESTAB 0 0 0.0.0.5:53404 0.0.0.6:50404 users:(("sva",pid=471,fd=19))
udp ESTAB 0 0 0.0.0.5:53441 0.0.0.6:50441 users:(("sva",pid=471,fd=21))
The communication CPU starts second and has finally two services running:
sv1 in global namespace and
sv2 in special network namespace nsprot1.
The output of ss --ipv4 --all --numeric --processes --udp executed in global namespace of the communication CPU is:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 0.0.0.0:50001 0.0.0.0:* users:(("sv1",pid=812,fd=18))
udp UNCONN 0 0 0.0.0.6:50002 0.0.0.0:* users:(("sv1",pid=812,fd=17))
udp UNCONN 0 0 0.0.0.6:50401 0.0.0.0:* users:(("sv1",pid=812,fd=13))
udp UNCONN 0 0 0.0.0.6:50402 0.0.0.0:* users:(("sv1",pid=812,fd=15))
The output of ip netns exec nsprot1 ss --ipv4 --all --numeric --processes --udp (nsprot1 namespace) is:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp ESTAB 0 0 0.0.0.17:50404 0.0.0.5:53404 users:(("sv2",pid=2421,fd=11))
udp ESTAB 0 0 0.0.0.17:50441 0.0.0.5:53441 users:(("sv2",pid=2421,fd=12))
Forwarding for IPv4 is enabled in sysctl in general and for all physical network adapters.
Just broadcast and multicast forwarding is disabled as not needed and not wanted.
The network configuration is set up on communication CPU with the following commands:
ip netns add nsprot1
ip link add vprot0 type veth peer name vprot1 netns nsprot1
ip link set dev enp3s0 netns nsprot1
ip address add 0.0.0.16/31 dev vprot0
ip netns exec nsprot1 ip address add 0.0.0.17/31 dev vprot1
ip netns exec nsprot1 ip address add 192.168.2.15/24 dev enp3s0
ip link set dev vprot0 up
ip netns exec nsprot1 ip link set vprot1 up
ip netns exec nsprot1 ip link set enp3s0 up
ip netns exec nsprot1 ip route add 0.0.0.4/30 via 0.0.0.16 dev vprot1
The network address translation is set up with the following commands:
nft add table ip prot1
nft add chain ip prot1 prerouting '{ type nat hook prerouting priority -100; policy accept; }'
nft add rule prot1 prerouting iif enp1s0 udp dport '{ 50404, 50441 }' dnat 0.0.0.17
nft add chain ip prot1 postrouting '{ type nat hook postrouting priority 100; policy accept; }'
nft add rule prot1 postrouting ip saddr 0.0.0.16/31 oif enp1s0 snat 0.0.0.6
The output of nft list table ip prot1 is:
table ip prot1 {
chain prerouting {
type nat hook prerouting priority -100; policy accept;
iif "enp1s0" udp dport { 50404, 50441 } dnat to 0.0.0.17
}
chain postrouting {
type nat hook postrouting priority 100; policy accept;
ip saddr 0.0.0.16/31 oif "enp1s0" snat to 0.0.0.6
}
}
There is defined additionally in global namespace only the table inet filter with:
table inet filter {
chain input {
type filter hook input priority 0; policy accept;
}
chain forward {
type filter hook forward priority 0; policy accept;
}
chain output {
type filter hook output priority 0; policy accept;
}
}
That NAT configuration is for a stateful NAT. It works for the UDP channel with the port numbers 50404 and 53404 because sv2, which starts last, opens 0.0.0.17:50404 and sends a UDP packet to 0.0.0.5:53404, on which source network address translation is applied in the postrouting hook for enp1s0 in the global namespace. The service sva of the application CPU sends back a UDP packet from 0.0.0.5:53404 to 0.0.0.6:50404 which reaches 0.0.0.17:50404. The UDP packet does not pass the prerouting rule for dnat to 0.0.0.17; it is sent directly via connection tracking to 0.0.0.17, as I found out later.
But this stateful NAT configuration does not work for the UDP channel with the port numbers 50441 and 53441. It looks like the reason is that sva of the application CPU already sends several UDP packets from 0.0.0.5:53441 to 0.0.0.6:50441 before the service sv2 is started at all and the destination port is opened in network namespace nsprot1. ICMP reports back that the destination port is unreachable. That is no surprise, taking into account that the destination port is not yet opened at all. It is unfortunately not possible to block the UDP packet sends in service sva until service sv2 is started and has opened the two UDP ports. Service sva sends periodic and sometimes additionally triggered spontaneous UDP packets from 0.0.0.5:53441 to 0.0.0.6:50441 independent of the connection state.
So the problem with this configuration seems to be the stateful NAT as the dnat rule in prerouting hook is still not used on destination port finally opened in network namespace nsprot1. There is still continued to route the UDP packets to 0.0.0.6:50441 which results in dropping the UDP packet and returning with ICMP that the destination port is not reachable.
Therefore the solution is maybe the usage of a stateless NAT. So there are executed additionally the commands:
nft add table ip raw
nft add chain ip raw prerouting '{ type filter hook prerouting priority -300; policy accept; }'
nft add rule ip raw prerouting udp dport '{ 50404, 50441, 53404, 53441 }' notrack
But the result was not as expected. The prerouting rule to change the destination address from 0.0.0.6 to 0.0.0.17 for UDP packets from input interface enp1s0 with destination port 50404 and 50441 is still not taken into account.
There was executed next by me:
nft add table ip filter
nft add chain filter trace_in '{ type filter hook prerouting priority -301; }'
nft add rule filter trace_in meta nftrace set 1
nft add chain filter trace_out '{ type filter hook postrouting priority 99; }'
nft add rule filter trace_out meta nftrace set 1
nft monitor trace
I looked on the trace and could see that the notrack rule is taken into account, but then the UDP packets with destination port 50441 are passed directly to the input hook. I don't know why.
I studied many, many hours very carefully following pages:
nft manual (read several times completely from top to bottom)
nftables wiki (most pages completely)
nftables on ArchWiki
and many, many other web pages regarding to usage of network namespaces and network address translation.
I tried really many different configurations, used Wireshark, used nft monitor trace, but I cannot find out a solution which works for the UDP channel with the ports 50441 and 53441 on sva sending UDP packets already before destination port 0.0.0.17:50441 is opened at all.
The stateful NAT configuration works if I manually terminate on application CPU the service sva, set up the network configuration on communication CPU with starting the two services sv1 and sv2 and start last manually the service sva again on all UDP ports already opened on communication CPU. But this order of starting the services cannot be done in the industrial device by default. The application service sva must run independent on communication services are ready for communication or not.
Which commands (chains/rules) are necessary to have a stateless NAT for the two UDP channels 0.0.0.5:53404 - 0.0.0.17:50404 and 0.0.0.5:53441 - 0.0.0.17:50441, independent of the open states of the destination ports and of which service sends the first UDP packet to the other service?
PS: The service sv2 can be started depending on configuration of the device also in global namespace using a different physical network adapter on which no NAT and network namespace are necessary. In this network configuration there is absolutely no problem with the UDP communication between the three services.
|
I found the solution by myself finally after many, many hours of reading documentations, tutorials, suggestions on various web pages, making lots of trials, and doing deep and comprehensive network and netfilter monitorings and analyzes.
nft add table ip prot1
nft add chain ip prot1 prerouting '{ type filter hook prerouting priority -300; policy accept; }'
nft add rule ip prot1 prerouting iif enp1s0 udp dport '{ 50404, 50441 }' ip daddr set 0.0.0.17 notrack accept
nft add rule ip prot1 prerouting iif vprot0 ip saddr 0.0.0.17 notrack accept
nft add chain ip prot1 postrouting '{ type filter hook postrouting priority 100; policy accept; }'
nft add rule ip prot1 postrouting oif enp1s0 ip saddr 0.0.0.17 ip saddr set 0.0.0.6 accept
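The same table in declarative form (a straight transcription of the commands above, e.g. for /etc/nftables.conf so the setup survives a reboot):

```
table ip prot1 {
    chain prerouting {
        type filter hook prerouting priority -300; policy accept;
        iif "enp1s0" udp dport { 50404, 50441 } ip daddr set 0.0.0.17 notrack accept
        iif "vprot0" ip saddr 0.0.0.17 notrack accept
    }
    chain postrouting {
        type filter hook postrouting priority 100; policy accept;
        oif "enp1s0" ip saddr 0.0.0.17 ip saddr set 0.0.0.6 accept
    }
}
```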
The netfilter hooks page should be opened and read first to understand the following explanation.
Explanation for the used commands:
A netfilter table is added for protocol ip (IPv4) with name prot1.
A chain is added to table prot1 with name prerouting of type filter for the hook prerouting with priority -300. It is important to use a priority number lower than -200 to be able to bypass the connection tracking conntrack. That excludes the usage of a chain of type nat for the destination network address translation as having an even lower priority.
A filter rule is added to table prot1 to chain prerouting which is applied only on IPv4 packets received on input interface enp1s0 of protocol type udp having as destination port either 50404 or 50441 which modifies the ip destination address of the packet from 0.0.0.6 to 0.0.0.17 and activates no tracking of the connection for this UDP packet. The verdict is specified explicitly with accept although not really necessary to pass the UDP packet received from the service sva of application CPU for the service sv2 of communication CPU as fast as possible to the next hook which is in this case the forward hook.
A second filter rule is added to table prot1 to chain prerouting which is applied only on all IPv4 packets received on input interface vprot0, independent of the protocol type (udp, icmp, ...), having the ip source address 0.0.0.17, to activate no tracking of the connection for this packet. It would of course also be possible to filter just on UDP packets with appropriate source or destination port number, but this additional limitation is not needed here and this rule is also good for ICMP packets sent back from 0.0.0.17 to 0.0.0.5 on a destination port not yet opened because service sv2 is not running at the moment. The verdict is again specified explicitly with accept instead of using the implicit default continue to pass the packet as fast as possible to the forward hook.
A second chain is added to table prot1 with name postrouting of type filter for the hook postrouting with priority 100. It is important to use a chain of type filter and not of type nat to be able to apply a source address translation on the UDP (and ICMP) packets which bypassed the connection tracking.
A filter rule is added to table prot1 to second chain postrouting which is applied only on IPv4 packets sent on output interface enp1s0, independent of the protocol type (udp, icmp, ...), having as source address 0.0.0.17, which modifies the ip source address of the packet from 0.0.0.17 to 0.0.0.6. The verdict is specified once more explicitly with accept although not really necessary to pass the UDP packet received from the service sv2 of the communication CPU to the service sva of the application CPU as fast as possible. This rule also changes the source address to 0.0.0.6 of the ICMP packet sent from 0.0.0.17 on destination port not reachable because service sv2 is not yet running. So the application CPU never notices that it communicates for two UDP channels with a different interface than 0.0.0.6, which was a second requirement to fulfill, although not really important.
It was a hard work to find out that a stateless network translation was needed for this very special network configuration and kind of communication between the services sva and sv2 and that the NAT must be done without using the nat hook.
| How to set up stateless NAT for two UDP connections from a global network to special network namespace? |